
IRC log for #gluster, 2013-05-21


All times shown according to UTC.

Time Nick Message
00:03 Rhomber joined #gluster
00:07 vpshastry joined #gluster
00:10 thisisdave joined #gluster
00:16 thisisdave hi folks, a bit of a newcomer to glusterfs, and ran into an issue just now. Tried to remove a brick from a replicated volume, and wound up with a slew of duplicate files. I'm wondering where I went wrong. Command I issued was `sudo gluster volume remove-brick ClusterHome replica 1 IB-orange1:/clusterhome start`
00:17 thisisdave fwiw, it's one of two bricks. reason for removal was to re-slice the zfs zpool into stripes, as it's a mirrored pool presently and thus an I/O bottleneck.
00:17 sysconfig joined #gluster
00:21 JoeJulian thisisdave: So you had 2 bricks in a replica 2, removed one which should have left you with just one single brick, right?
00:22 thisisdave Correct.
00:22 JoeJulian ~pasteinfo | thisisdave
00:22 glusterbot thisisdave: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
00:23 thisisdave http://fpaste.org/13312/13690957/
00:23 glusterbot Title: #13312 Fedora Project Pastebin (at fpaste.org)
00:24 JoeJulian eww
00:24 JoeJulian What version is this?
00:24 thisisdave prior to the brick removal, it was like this: http://fpaste.org/13313/13690958/
00:24 glusterbot Title: #13313 Fedora Project Pastebin (at fpaste.org)
00:24 thisisdave 3.3.1
00:25 thisisdave ClusterHome is the only volume I care about; nonrep was for testing.
00:25 JoeJulian Go ahead and commit the brick removal. You may have to add the word force to make it actually happen.
00:26 thisisdave I stopped the removal after a cluster user alerted me of the dupes. I should still commit, correct?
00:26 JoeJulian yes
00:26 thisisdave still with "replica 1" (asking for safety's sake), yes?
00:27 JoeJulian The duplicates are due to both replicas still being part of a distribute-only volume. This looks like a bug that should be fairly easy to duplicate.
00:27 JoeJulian The "replica 1" has already taken effect, so it doesn't matter either way.
00:27 thisisdave STDOUT: replica count (1) option given for non replicate volume ClusterHome
00:27 thisisdave this is what the force will take care of I assume
00:28 JoeJulian Meh, leave it off then.
00:30 m0zes looks like an ugly (from a user's perspective) bug.
00:30 sysconfig left #gluster
00:30 thisisdave tried to force the brick removal and I'm still getting "replica count (1) option given for non replicate volume ClusterHome" ...does this mean I should have "replica 0" instead?
00:31 DEac- joined #gluster
00:32 JoeJulian just leave off the "replica N"
00:33 JoeJulian ... have to see if this can be duplicated in 3.3.2 and/or 3.4.0
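For reference, the remove-brick lifecycle being discussed is roughly the following sketch (volume and brick names are the ones from this conversation; as seen above, the "replica 1" argument may be rejected on 3.3.1 and can simply be left off):

    # begin the removal; any data unique to the brick gets migrated off
    gluster volume remove-brick ClusterHome IB-orange1:/clusterhome start
    # watch migration progress
    gluster volume remove-brick ClusterHome IB-orange1:/clusterhome status
    # finalise the removal; "force" may be needed on some versions
    gluster volume remove-brick ClusterHome IB-orange1:/clusterhome commit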
00:37 yinyin joined #gluster
00:40 Rhomber joined #gluster
00:48 Rhomber joined #gluster
00:54 Rhomber joined #gluster
00:55 portante joined #gluster
01:08 Rhomber joined #gluster
01:29 aliguori joined #gluster
01:32 majeff joined #gluster
01:34 Rhomber joined #gluster
01:34 lnxsix joined #gluster
01:40 majeff joined #gluster
02:04 thisisdave @JoeJulian Thanks for your help earlier. Things worked out, the duplicates disappeared. I resliced the zpool on the other brick and added the brick, *forgetting* the "replica 2" option. Removed the brick quickly enough, and now I can't re-add due to an "already part of volume" error. Followed setxattr info I found to no avail.
02:07 flrichar joined #gluster
02:09 m0zes already part of a volume
02:10 thisisdave yet gluster volume info indicates that there's only 1 brick...
02:10 m0zes thisisdave: this didn't help? http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
02:10 glusterbot <http://goo.gl/YUzrh> (at joejulian.name)
02:10 m0zes including the restart of glusterd?
02:11 thisisdave nope. one of the first links I saw, among others that indicated the same
02:12 m0zes since this is an empty brick, remove the slice (and mountpoint) and recreate it.
02:14 chirino joined #gluster
02:26 Shdwdrgn joined #gluster
02:29 thisisdave @m0zes apologies for my lack of understanding; can you dumb it down a notch or two?
02:29 vpshastry joined #gluster
02:30 majeff joined #gluster
02:31 m0zes thisisdave: you're using zfs, I was just suggesting deleting the volume in the zpool on the *bad* server, removing the mountpoint directory for the brick and re-creating both the mountpoint and the volume in the zpool.
02:31 zaitcev joined #gluster
02:32 thisisdave @m0zes got it, thanks. Perhaps that'll clear up the mystery of why `gluster volume status` is _still_ showing the NFS server on the other brick...
02:32 m0zes if this were any other filesystem I'd say umount, mkfs the brick. delete the brick mountpoint, recreate the brick mountpoint and remount the brick. that should remove all residual traces of the old extended attributes.
02:36 thisisdave @m0zes that did the trick. many thanks.
02:36 m0zes np :)
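A minimal sketch of what m0zes suggested for an empty zfs-backed brick (the pool/dataset name "tank/clusterhome" is hypothetical; only do this when the brick holds no data you need):

    # on the server whose brick is being recycled; unmount the dataset first if busy
    zfs destroy tank/clusterhome                        # drop the old dataset
    rmdir /clusterhome                                  # remove the stale mountpoint
    zfs create -o mountpoint=/clusterhome tank/clusterhome
    # with the old extended attributes gone, re-adding should no longer complain:
    gluster volume add-brick ClusterHome replica 2 IB-orange1:/clusterhome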
02:45 flrichar joined #gluster
02:55 bharata joined #gluster
03:00 vshankar joined #gluster
03:02 sgowda joined #gluster
03:11 lalatenduM joined #gluster
03:18 badone joined #gluster
03:26 vpshastry joined #gluster
03:28 vpshastry left #gluster
03:32 Shdwdrgn joined #gluster
03:33 majeff joined #gluster
03:39 majeff joined #gluster
03:43 wgao__ hi all, what are the common reasons for a host install failing on oVirt?
03:53 vpshastry joined #gluster
03:54 wgao__ left #gluster
03:54 wgao joined #gluster
03:59 wgao joined #gluster
04:01 kshlm joined #gluster
04:02 wgao joined #gluster
04:03 anands joined #gluster
04:06 clutchk joined #gluster
04:22 shylesh joined #gluster
04:26 shylesh joined #gluster
04:35 vshankar joined #gluster
04:37 aravindavk joined #gluster
04:41 shylesh joined #gluster
04:52 hagarth joined #gluster
05:16 saurabh joined #gluster
05:17 majeff joined #gluster
05:18 sgowda joined #gluster
05:21 mohankumar joined #gluster
05:24 lanning joined #gluster
05:28 kshlm joined #gluster
05:31 zhashuyu joined #gluster
05:33 satheesh joined #gluster
05:35 rastar joined #gluster
05:46 bulde joined #gluster
05:50 kshlm joined #gluster
05:52 majeff left #gluster
05:57 vpshastry joined #gluster
06:00 guigui1 joined #gluster
06:03 sgowda joined #gluster
06:05 isomorphic joined #gluster
06:07 ricky-ticky joined #gluster
06:09 balunasj joined #gluster
06:09 bala joined #gluster
06:15 anands joined #gluster
06:17 jtux joined #gluster
06:28 rgustafs joined #gluster
06:32 dobber joined #gluster
06:47 cyberbootje joined #gluster
06:51 Guest79483 joined #gluster
06:53 anands joined #gluster
06:54 ngoswami joined #gluster
06:54 rotbeard joined #gluster
06:59 ekuric joined #gluster
07:01 vimal joined #gluster
07:06 ctria joined #gluster
07:06 venkatesh joined #gluster
07:14 arusso joined #gluster
07:14 arusso joined #gluster
07:15 Rorik joined #gluster
07:15 thekev joined #gluster
07:15 spider_fingers joined #gluster
07:15 JordanHackworth joined #gluster
07:18 masterzen joined #gluster
07:18 badone joined #gluster
07:22 anands joined #gluster
07:23 sgowda joined #gluster
07:25 tjikkun_work joined #gluster
07:27 hybrid512 joined #gluster
07:35 satheesh joined #gluster
07:52 jclift joined #gluster
07:58 jtux joined #gluster
08:08 ollivera joined #gluster
08:10 jbrooks joined #gluster
08:16 rb2k joined #gluster
08:19 pull joined #gluster
08:20 Norky joined #gluster
08:25 hybrid512 joined #gluster
08:27 hybrid5121 joined #gluster
08:35 puebele1 joined #gluster
08:35 atrius_ joined #gluster
08:36 Guest79483 joined #gluster
08:37 tziOm joined #gluster
08:55 tziOm I am having problems with gluster (rep3, distributed) giving me a seemingly random delay of 1-6 seconds when reading files
09:00 venkatesh joined #gluster
09:05 mindbender joined #gluster
09:06 mindbender hi, I've been told that there's been some xfs troubleshooting around for glusterfs
09:06 mindbender I was wondering if anyone could help
09:06 mindbender I'm testing xfs on centos5 vs centos6 and it appears quite slow on centos6 (about 6 times slower)
09:06 mindbender here are my test results, http://pastebin.com/3sDRF8xk if anyone has an idea
09:06 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
09:07 mindbender http://www.fpaste.org/13361/36912724/
09:07 glusterbot Title: #13361 Fedora Project Pastebin (at www.fpaste.org)
09:10 Staples84 joined #gluster
09:19 venkatesh joined #gluster
09:21 Elendrys joined #gluster
09:21 Elendrys Hi, I need some help with a healing issue on a 2 bricks replicated volume
09:22 tjikkun joined #gluster
09:22 tjikkun joined #gluster
09:30 tshm Then you had better ask more specifically. What's your problem?
09:32 Elendrys I have a bunch of errors in the glusterfshd.log about self-healing process
09:33 Elendrys it looks like there are a lot of gfid links in the .glusterfs directory but the original file is missing
09:33 Elendrys I send you a part of log file
09:35 Elendrys The status of the volume is displayed as ok but when i request the self-heal status it displays a lot of errors
09:35 tshm @Elendrys: Try pastebin or similar instead, so that anybody can help you out. I don't know what's up with that, unfortunately.
09:35 Elendrys Ok
09:35 Elendrys Thanks
09:36 tshm The more people who can see your errors, the more people can potentially help you ;-)
09:36 tshm Sorry I can't
09:37 Elendrys Here is a pastebin link : http://pastebin.com/089yDTyj
09:37 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
09:38 Elendrys http://fpaste.org/13370/12911313/
09:38 glusterbot Title: #13370 Fedora Project Pastebin (at fpaste.org)
09:45 jclift Elendrys: As a thought, it's probably worth asking on the mailing list too (and attaching the log file).  That way if no-one is able to help you right away, then people who might see it when you're away can still respond.
09:45 ricky-ticky joined #gluster
09:46 jclift mindbender: Interesting... that seems like double the amount of time needed for the copying
09:47 jclift mindbender: It might be worth asking on the gluster mailing lists if anyone is aware of a cause/reason for that kind of thing
09:47 mindbender jclift: yeah, I'm replicating a backup server situation, as our backup server takes 8-10 hours to remove a daily folder. I was hoping to improve that with CentOS6 after speaking to the xfs devs, who recommended it.
09:47 mindbender jclift: I'm planning to use "lazy_counts" next, maybe they introduced additional safety features on centos6
09:48 mindbender like write barriers, which may not perform well with many files
09:48 jclift mindbender: As a thought, the "general" way people get told to create an xfs filesystem when using gluster is mkfs.xfs -i size=512 /dev/to/initialise
09:48 jclift mindbender: Maybe the -i size=512 would make a difference?
09:49 jclift mindbender: I'm not very in depth with xfs though, so have absolutely no better idea here. :)
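The mkfs invocation jclift mentions, for completeness (the device path here is a placeholder):

    # 512-byte inodes leave room for gluster's extended attributes
    mkfs.xfs -i size=512 /dev/sdb1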
09:49 mindbender jclift: the thing is, I'm not mkfs'ing the fs, there's an ext. storage and i just change the OS
09:49 Elendrys jclift: ok i'll send on the mailing list too
09:50 mindbender jclift: yeah fair enough, just heard that glusterfs ppl had been doing some troubleshooting from z00dax/centos dev and he suggested i ask :) probably not the best place to ask
09:50 jclift mindbender: Ahhhh
09:51 jclift mindbender: It might also be a "time of day" thing.  I *think* many of the Gluster dev's would only be starting to come online in the next few hours.
09:51 jclift mindbender: Either way, good luck. :)
09:51 mindbender jclift: thanks :)
09:52 tshm Yes, I noticed you get a lot more answers whenever the Americans are awake. :-)
09:52 jclift :D
09:53 kbsingh jclift: i will infiniband you!
09:53 jclift kbsingh: Good man!
09:53 kbsingh :D
09:53 jclift :)
09:54 jclift kbsingh: Am back in Aust until next week.  Tempted to grab some of my IB stuff out of storage, but it wouldn't be useful.  No chance of fitting it in my luggage for the flight back. ;D
09:54 kbsingh might set off a few alarm bells at security as well
09:54 jclift Is that a *SERVER?!?!?!* in your pocket?
09:54 * jclift gets back to work
09:55 spider_fingers left #gluster
10:06 rastar1 joined #gluster
10:12 shylesh joined #gluster
10:19 nightwalk joined #gluster
10:20 isomorphic joined #gluster
10:23 venkatesh joined #gluster
10:25 duerF joined #gluster
10:27 nightwalk joined #gluster
10:28 isomorphic joined #gluster
10:42 shylesh joined #gluster
10:51 badone joined #gluster
11:06 flrichar joined #gluster
11:26 hagarth joined #gluster
11:27 kkeithley1 joined #gluster
11:42 roo9 joined #gluster
11:42 edward1 joined #gluster
11:42 roo9 joined #gluster
11:46 roo9 joined #gluster
11:50 flrichar joined #gluster
12:10 flrichar joined #gluster
12:27 andrewjsledge joined #gluster
12:28 dustint joined #gluster
12:32 majeff joined #gluster
12:32 tziOm Is it common to have hangs for 10-20 seconds when reading files? (3.3.1)
12:33 dustint_ joined #gluster
12:41 dustint_ joined #gluster
12:44 yinyin joined #gluster
12:48 stigchristian joined #gluster
12:53 mohankumar joined #gluster
13:08 spider_fingers joined #gluster
13:10 tshm If some of your files need self-healing, I suppose so
13:11 tshm i.e., if the file you're reading first needs to be healed
13:11 Chocobo joined #gluster
13:12 Chocobo Hi All.  What happens if I set replication to 2 and have 3 bricks?
13:22 kkeithley| Chocobo: nothing, it won't work
13:26 rastar joined #gluster
13:28 majeff joined #gluster
13:31 Chocobo kkeithley|: Ahh, good.  Thanks.
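The reason it won't work: the brick count has to be a multiple of the replica count, so a create like the first one below is rejected (volume and server names are hypothetical):

    # fails: 3 bricks is not a multiple of "replica 2"
    gluster volume create testvol replica 2 server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
    # accepted: 2 (or 4, 6, ...) bricks
    gluster volume create testvol replica 2 server1:/bricks/b1 server2:/bricks/b1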
13:34 rwheeler joined #gluster
13:34 aliguori joined #gluster
13:35 tziOm I have now tried a setup with distribute only, but performance is still far from cool
13:38 tziOm seems like 3.4 ubuntu repository is not working..
13:38 majeff joined #gluster
13:39 edong23 joined #gluster
13:43 bennyturns joined #gluster
13:43 theron joined #gluster
13:47 Norky joined #gluster
13:49 Chocobo hrmmm, in my peer status I see "Peer Disconnected"  How can I reconnect it?
13:55 portante joined #gluster
13:56 deepakcs joined #gluster
13:57 aliguori joined #gluster
13:57 plarsen joined #gluster
13:59 chirino joined #gluster
14:06 ricky-ticky joined #gluster
14:07 yinyin_ joined #gluster
14:08 hagarth joined #gluster
14:10 Chocobo Err, I am having a bit of an issue.  On Ubuntu I can not start it with "service glusterfs-server start" because it says it has been converted to an upstart job.   If I try running it with "start glusterfs-server" it says Unknown job.  Any ideas?
14:13 Chocobo This is odd.  It lists files in the package that aren't there, like /etc/init/glusterfs-server.conf!   What!?  Reinstalling doesn't seem to help
14:16 pjameson joined #gluster
14:18 pjameson I've got a gluster replica pair that I'm messing around with. I attempted to simulate complete failure on one of the nodes by stopping gluster and just mkfs ing over the top of the brick's directory (/mnt/raid). I now can't get the brick service to start and /mnt/raid/.glusterfs isn't being recreated. Does anyone know if this is expected (e.g. in order to recover I have to do something else), or if it might be a bug?
14:21 jtux joined #gluster
14:22 fps joined #gluster
14:26 manik joined #gluster
14:26 bala joined #gluster
14:29 manik1 joined #gluster
14:30 sysconfig joined #gluster
14:31 jbrooks joined #gluster
14:32 fps ok, i have a general question, due to my lack of knowledge and experience with distributed filesystems in general and glusterfs in particular. let's say i have two bricks on two different machines. one machine writes to the glusterfs and calls fsync(). can i expect the other machine to see the changes when fsync() returns?
14:32 fps i.e. machine a] calls fsync(), fsync() returns, a] signals b] that the data is in the fs. is it guaranteed that b] now sees the data? or are there races?
14:33 fps and if there are races, is there a different synchronization mechanism available?
14:34 bugs_ joined #gluster
14:35 ndevos fps: that is the idea, yes - except that it does not matter how many and where your bricks are, your apps should not access them anyway
14:36 fps ndevos: yes, i didn't make that assumption. the data is accessed through the glusterfs only, no direct access to the local filesystems. it is a general question about whether fsync() is implemented with these semantics
14:37 fps i.e. if any node calls fsync() is it guaranteed that when it returns the data is visible on all other nodes..
14:37 fps ?
14:38 kaptk2 joined #gluster
14:38 fps that machine a] and machine b] each "host" a brick was just as a configuration example which maybe illustrated my limited knowledge more than it helped to understand the question :D
14:40 ndevos fps: fsync() should be called on the node that writes the data, and maybe before re-reading data too (it might have been cached?)
14:41 fps ndevos: yes, that the node that wrote the data calls fsync() was included in my original question (" let's say i have two bricks on two different machines. one machine writes to the glusterfs and
14:41 fps calls fsync(). can i expect the other machine to see the changes when fsync() returns?"
14:41 fps oops
14:41 fps sorry, for the cut and paste mishap :(
14:42 ndevos fps: you could write a simple test in one app that uses two different glusterfs mountpoints of the same volume
14:42 fps ndevos: a test is something different than a guarantee.. :D
14:43 ndevos fps: sure, lets say that I expect it to work, but that I dont guarantee it
14:45 fps ndevos: ok. hmm. then if it is not guaranteed, is there an alternative way to guarantee the visibility of changed data at a point in time?
14:46 fps without that it is pretty much impossible to implement reliable distributed processes where one process depends on the output of another
14:47 mohankumar joined #gluster
14:47 ndevos fps: I dont know if it is guaranteed, but I expect it to if you use fsync() and/or fdatasync() correctly, maybe also mount with directio to prevent local (vfs) caches
14:47 ndevos fps: or, open the file with O_DIRECT or something
14:47 fps ndevos:
14:48 fps ok..
14:48 fps thanks for your input. maybe i'll find some definitive documentation on the subject..
14:48 lh joined #gluster
14:48 lh joined #gluster
14:49 ndevos fps: glusterfs tries to be posix compatible, but these things are not easy to get 100% right for a network-fs, I am sure that it is possible to code an application that works as you are expecting
14:51 hchiramm__ joined #gluster
14:51 ndevos fps: I am not sure if fsync() is sufficient, but mounting with O_DIRECT (might require mount-option for directio) and using fsync() would get close
14:51 ndevos s/mounting/opening the file/
14:51 glusterbot ndevos: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
14:52 ndevos glusterbot: I think you are wrong about that
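A minimal sketch of the test ndevos suggests, assuming a volume named testvol served from server1 (all names hypothetical): mount the same volume twice and compare what each mount sees after an fsync'ed write.

    mount -t glusterfs server1:/testvol /mnt/a
    mount -t glusterfs server1:/testvol /mnt/b
    # write through one mount; conv=fsync makes dd fsync the file before returning
    dd if=/dev/urandom of=/mnt/a/probe bs=4k count=1 conv=fsync
    # compare the checksums each mount point reports
    md5sum /mnt/a/probe /mnt/b/probe

As ndevos says, this only demonstrates behaviour on one setup; it is not a guarantee of the semantics.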
14:53 Chocobo Woa, this is odd.  I have 4 nodes each with 1GB bricks.   df -h shows that gv0 is 7.7GB, 2.4GB used and 5.0GB free!! What!?
14:55 lh joined #gluster
14:56 spider_fingers left #gluster
14:57 Chocobo nm, I had a typo when adding a brick.  Phew.  thought I was crazy.
14:58 glusterbot joined #gluster
15:04 jthorne joined #gluster
15:07 daMaestro joined #gluster
15:10 hchiramm__ joined #gluster
15:17 failshell joined #gluster
15:22 piotrektt_ joined #gluster
15:36 semiosis JoeJulian: pong
15:43 portante joined #gluster
15:44 hchiramm__ joined #gluster
15:56 vpshastry joined #gluster
15:59 zaitcev joined #gluster
16:01 vpshastry joined #gluster
16:04 sprachgenerator joined #gluster
16:06 nueces joined #gluster
16:14 JoeJulian I have no recollection of what I was going to ask/tell you.
16:14 chirino joined #gluster
16:16 JoeJulian Oh, I remember now. It was related to picturemarketing.com . A last minute idea that I had on Saturday for a friend's store grand opening.
16:32 jack_ joined #gluster
16:34 Mo__ joined #gluster
16:40 rastar joined #gluster
16:48 lpabon joined #gluster
17:03 thomaslee joined #gluster
17:06 tziOm joined #gluster
17:11 sprachgenerator has anyone encountered a volume stop command not being reflected/issued on all peers?
17:12 sprachgenerator this is for: glusterfs 3.4.0beta1 built on May  8 2013 01:28:00 - after issuing a volume stop command on an IB mounted volume across 100 peers, a few of the nodes still show the volume as "started"
17:14 sprachgenerator specifically 3 out of the 100
17:15 bennyturns joined #gluster
17:18 sprachgenerator if I stop/restart the glusterfs-server on the nodes that show the volume in a started state - they show no volumes present
17:19 sprachgenerator however all peers are still attached
17:30 zaitcev joined #gluster
17:38 chirino joined #gluster
17:40 rb2k joined #gluster
17:49 mohankumar joined #gluster
17:55 hagarth joined #gluster
17:56 rb2k joined #gluster
18:04 chirino joined #gluster
18:05 samppah @bug 947830
18:05 glusterbot samppah: Bug http://goo.gl/kCtCE low, medium, ---, ndevos, ASSIGNED , On RHS clients machines, installing glusterfs-fuse package does not install fuse package automatically
18:06 linwt__ joined #gluster
18:20 tziOm How is small file performance on ib vs tcp?
18:24 JoeJulian tziOm: better
18:29 tziOm ok..
18:29 tziOm I think that sound like "terrible"
18:34 jbrooks joined #gluster
18:34 JoeJulian tziOm: Well, you asked a very relative question. RDMA by its very nature is going to perform better than passing that data through the kernel.
18:43 tziOm sure, but it's not the kernel that's the "problem" with gluster, it seems.
18:44 JoeJulian tziOm: How did you determine that?
18:45 tziOm for example, the performance of samba when running via gluster vs. not.
18:46 JoeJulian And how does that tell you that the additional context switching is not a problem?
18:46 rb2k joined #gluster
18:47 tziOm ye..
18:47 JoeJulian Though it would be interesting to compare that to, say, sshfs.
18:47 tziOm but my problems are not related to extreme performance
18:47 tziOm simple things like a opendir/readdir
18:48 tziOm and periodic 5-10-30s hangups when accessing small files..
18:48 JoeJulian Also, if it's samba you're worried about, you might be interested in an email I just saw... let me see if I can find a link to it...
18:48 tziOm vfs stuff?
18:49 JoeJulian yeah
18:49 tziOm any good news on that front?
18:49 tziOm mounting is at least 10k times faster with samba than with nfs/gluster (autofs usage)
18:51 JoeJulian http://lists.nongnu.org/archive/html/gluster-devel/2013-05/msg00165.html
18:51 glusterbot <http://goo.gl/MXwrQ> (at lists.nongnu.org)
18:54 nueces joined #gluster
18:55 tziOm JoeJulian, thanks
18:58 JoeJulian tziOm: hmm, missed that "5-10-30s hangups" line earlier.... That's not normal. Any clues in your client logs when that happens?
18:59 tziOm no
18:59 JoeJulian by "small file performance" it means extra network round-trip latency, not seconds.
18:59 Chocobo Hi all.  I am having a problem with GlusterFS blocking the boot on Ubuntu 12.04 LTS.  I followed some tips here: http://unix-heaven.org/comment/1854#comment-1854  but they do not seem to help
18:59 glusterbot <http://goo.gl/gHksV> (at unix-heaven.org)
19:00 Chocobo thanks glusterbot
19:00 tziOm JoeJulian, thats what I mean
19:00 tziOm JoeJulian, and what about readdir performance, do you think it's acceptable? Can I do anything to make this perform better?
19:00 JoeJulian Well then, my guess would be that until you solve the underlying issue that causes many seconds of network latency, even rdma isn't going to cure that.
19:01 JoeJulian Unless there are thousands of files in a directory, I've found readdir satisfactory for my needs.
19:02 JoeJulian In the two places I did have over 40k dirents, I treed out the files and performance is back to acceptable levels.
19:03 JoeJulian Chocobo: LTS... isn't that the one with the broken upstart jobs?
19:05 Chocobo JoeJulian: I am not sure.  Broken upstart jobs in general or for GlusterFS?
19:05 Chocobo I installed glusterfs from https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.3
19:05 glusterbot <http://goo.gl/7ZTNY> (at launchpad.net)
19:06 tziOm JoeJulian, what is your usage?
19:06 rotbeard joined #gluster
19:06 JoeJulian tziOm: Everything from web sites to windows lusers.
19:07 JoeJulian home directories, vm images, mysql (innodb) data
19:10 tziOm JoeJulian, ok.. what is your setup, rep 2 dist ?
19:12 JoeJulian 15 volumes (project specific) each 4x3
19:12 semiosis Chocobo: did you add 'nobootwait' to your fstab mount options?
19:13 Chocobo semiosis: I think that may have fixed it.  Thanks.   Rebooting the VM now.
19:14 ctria joined #gluster
19:14 semiosis Chocobo: i wrote that comment btw
19:15 tziOm JoeJulian, so 15 clients?
19:15 semiosis the one after, by "jeff" is totally wrong
19:15 JoeJulian tziOm: No, I have around 200 clients.
19:15 semiosis http://xkcd.com/386/
19:15 glusterbot Title: xkcd: Duty Calls (at xkcd.com)
19:15 tziOm JoeJulian, is it preferred to do one brick per device (raid) or several?
19:15 Chocobo semiosis: Well it didn't block booting but it didn't automount either.  Doing "mount -a" after boot works, but it spits out this error:  unknown option _netdev (ignored)
19:16 semiosis why does everyone call that an error?
19:16 semiosis it's hardly a warning
19:16 Chocobo Thanks for the post semiosis, I assume you also authored the Ubuntu packages I am using.
19:16 semiosis an error is when something can not continue
19:16 semiosis that message clearly says it's ignoring something (thus continuing)
19:16 JoeJulian Depends on the use case, and the admin. I prefer lvm partitions on block devices, each logical volume constrained to one device.
19:17 Chocobo semiosis: sorry, I wasn't even thinking.   warning, info, etc.   I call most unanticipated information an error, even though I know that isn't correct.
19:17 semiosis Chocobo: there's like a million reasons (ok not really, but several) why a remote network mount can fail at boot time. throw a client log up on pastie.org if you want help diagnosing it
19:17 semiosis nobootwait will at least save the rest of your system boot process if the gluster mount fails
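An example of the fstab entry being discussed (the volume name and mount point are hypothetical); "_netdev" is ignored by the glusterfs mount helper but harmless, and "nobootwait" keeps Ubuntu from blocking the boot if the mount fails:

    server1:/testvol  /mnt/gluster  glusterfs  defaults,_netdev,nobootwait  0  0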
19:18 JoeJulian Well, (figurative) fires are out. I'm going to finally leave the house to head to the office. ttfn.
19:19 tziOm JoeJulian, but is there any general advice on this?
19:19 tziOm JoeJulian, so you make many drives one logical with no raid?
19:28 Chocobo Man, is there a good way to get long log files to a pastie?  I use tmux and grabbing log files is a pain in the arse.
19:34 Chocobo semiosis: glustershd.log:http://pastie.org/7940418
19:34 glusterbot Title: #7940418 - Pastie (at pastie.org)
19:34 Chocobo gluster.log http://pastie.org/7940417
19:34 glusterbot Title: #7940417 - Pastie (at pastie.org)
19:37 Chocobo I don't understand the DNS resolution error.  in my /etc/hosts I have "127.0.0.1 test4-vm"
19:38 Chocobo (I am running this from test4-vm)
19:39 Keawman joined #gluster
19:48 sefz joined #gluster
19:54 rwheeler joined #gluster
20:00 sjoeboo_ question: for rpm based distros, if i wanted to use rdma, i need the glusterfs-rdma package server side.. do i need it client side as well?
20:02 semiosis sjoeboo_: probably
20:03 sjoeboo_ okay.
20:07 semiosis Chocobo: ok i've seen that message before, not sure why it happens :(
20:07 semiosis Chocobo: if you restart over & over, does it happen every time, or just some times?
20:09 Chocobo semiosis: every time
20:09 semiosis it only happened on my test vms sometimes
20:13 jag3773 joined #gluster
20:34 nueces joined #gluster
20:41 rb2k joined #gluster
20:42 mtanner joined #gluster
20:48 Alknelt joined #gluster
20:48 Alknelt Hi blusterers. I have a problem/ question. Is there any possible way to reduce/ turn off rebalancing logs?
20:49 Alknelt glusterers … spell check drat!
20:52 theron joined #gluster
21:01 badone joined #gluster
21:10 Guest79483 joined #gluster
21:19 semiosis Chocobo: your lucky day
21:20 semiosis upgraded one of my pure client machines (prod) to ubuntu precise (from oneiric) and immediately hit the dns resolution at boot problem
21:20 semiosis working on a fix now
21:24 semiosis Chocobo: http://pastie.org/7940902 <-- try that
21:24 glusterbot Title: #7940902 - Pastie (at pastie.org)
21:24 semiosis works for me on ubuntu precise
21:24 semiosis at least one boot
21:24 semiosis it blocks mounting until all the static network interfaces are up
21:25 JonnyNomad joined #gluster
21:27 duerF joined #gluster
21:31 glusterbot New news from newglusterbugs: [Bug 965869] Redundancy Lost with replica 2 and one of the servers rebooting <http://goo.gl/rHFrW>
21:39 Alknelt Is there a way to reduce logging of rebalancing? Logs fill the file system in less than 24 hours.
21:39 Nagilum_ reduce loglevel
21:50 Alknelt Nagilum, tried that for glusterd. Didn't make a difference. I changed it to Error.
21:51 Nagilum_ what kind of messages do you see? I? W? E?
21:52 Alknelt A message is logged for every file moved.
21:52 Nagilum_ what kind of message?
21:52 Alknelt I believe they are 'I'. I don't have any atm
21:53 Nagilum_ there are diagnostics.brick-log-level and diagnostics.client-log-level, so there should not be too much testing required
21:53 Alknelt Mostly 'I'. I found a rebalance log.
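The options Nagilum_ mentions are set per volume; a sketch (the volume name is a placeholder, and whether the 3.3 rebalance process honours these settings is worth verifying):

    gluster volume set <volname> diagnostics.brick-log-level ERROR
    gluster volume set <volname> diagnostics.client-log-level ERROR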
22:10 nightwalk joined #gluster
22:23 Guest79483 joined #gluster
22:34 JonnyNomad I'm getting "{path} or a prefix of it is already part of a volume". This is a new install, how could it already be part of a volume?
22:34 glusterbot JonnyNomad: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
22:43 rb2k joined #gluster
22:45 JonnyNomad well that link was less than helpful.  :(
22:46 JoeJulian Well that's a first.
22:46 JonnyNomad That link assumes that I'm removing a brick and trying to reuse it. This is a fresh install and the first use of said brick.
22:47 JoeJulian btw... there were two links there...
22:47 JoeJulian Are you using a non-local path for your brick?
22:48 JonnyNomad nope
22:49 JoeJulian What's the command you're using to create the volume?
22:50 JonnyNomad gluster volume create activemq replica 2 transport tcp gluster1:/mqvol/brick gluster2:/mqvol/brick
22:50 nueces joined #gluster
22:51 JoeJulian Did you check to make sure the hostnames resolve correctly? (I'm pretty sure that'd be a different error, but still...)
22:51 JonnyNomad I did check that already, yes.
22:51 JoeJulian /mqvol/brick is a formatted filesystem
22:52 JonnyNomad formatted with xfs
22:52 JoeJulian hmm
22:52 JoeJulian "getfattr -m . -d -e hex /mqvol/brick" on both boxes
22:52 JoeJulian If you have fpaste or dpaste installed, you can pipe it
22:53 JoeJulian Chocobo: You asked about that earlier as well. fpaste (on rpm based distros) or dpaste (on deb) are great command line tools you can pipe to.
22:53 JonnyNomad I get no output.
22:54 JoeJulian selinux?
22:54 JonnyNomad shouldn't be. checking now.
22:55 JonnyNomad disabled
22:57 JonnyNomad here's the weird thing; I was able to get my proof-of-concept environment working without incident but when starting on the production environment, it goes south on me.
22:57 JonnyNomad but I built the systems the same.
22:57 JoeJulian Of course... <sigh/>
22:57 JoeJulian #define insanity....
22:58 JoeJulian python -c 'import os; print os.path.realpath("/mqvol/brick")'
22:58 JoeJulian If it's that realpath bug, I would think this would error.
22:59 JonnyNomad no error
22:59 JoeJulian and it printed /mqvol/brick
22:59 JonnyNomad that is correct
23:00 JonnyNomad this is, btw, GlusterFS 3.3.1 on Ubuntu 12.04 LTS 64-bit.
23:00 JoeJulian gah! I'm not looking at the bigger picture...
23:00 JoeJulian getfattr -m . -d -e hex {/,/mqvol}
23:04 JonnyNomad okay, have output from that. trusted.glusterfs.volume-id=0x07114d0724a546478df0dbd3e3c7507b
23:05 JoeJulian So now back to the less-than-helpful link to clear that...
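For reference, the clearing procedure from the linked post is roughly the following, shown here against the brick path from this conversation (restart glusterd afterwards; on Ubuntu the service is glusterfs-server):

    setfattr -x trusted.glusterfs.volume-id /mqvol/brick
    setfattr -x trusted.gfid /mqvol/brick
    rm -rf /mqvol/brick/.glusterfs
    service glusterfs-server restart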
23:06 JonnyNomad didn't get an error that time.
23:07 JonnyNomad unlike when I first tried it.
23:07 JonnyNomad weird.
23:07 JonnyNomad I'm going to be stupid embarrassed if this is a pebkac thing.
23:07 JoeJulian hehe
23:07 JoeJulian Was this imaged from the dev environment?
23:08 vpshastry joined #gluster
23:08 JonnyNomad it was an os template. the template was created before gluster was installed.
23:09 * JoeJulian shrugs
23:09 JonnyNomad my history file tells me that I'm an idiot, however.
23:09 JoeJulian Hey, at least you had me going down the wrong path too. :D
23:09 JoeJulian lol
23:09 JonnyNomad apparently someone secretly swapped a couple of keys, but then put them back.
23:10 JonnyNomad my apologies for the boondoggle.
23:10 JoeJulian glad I could help.
23:10 JonnyNomad thank you for the help.  :)
23:10 semiosis glusterbot: keep tryin' better luck next time
23:11 glusterbot I'm not happy about it either.
23:11 JonnyNomad and my volume created. Thanks again.
23:46 jag3773 joined #gluster
23:50 badone joined #gluster
23:50 edong23 joined #gluster
