
IRC log for #gluster, 2014-05-30


All times shown according to UTC.

Time Nick Message
00:03 jag3773 joined #gluster
00:05 social keytab: it should always be UTC
00:44 sjm left #gluster
01:44 MacWinne_ joined #gluster
01:45 bala joined #gluster
01:46 MacWinne_ Hi, i'm trying to troubleshoot why my glusterfsd takes 20% of a CPU when I do an ls operation on a directory that is mounted via the gluster client.
01:46 MacWinne_ there are about 1800 directories in the folder
01:47 MacWinne_ time ls | wc -l   => real 0m3.302s, user 0m0.008s, sys 0m0.007s
01:47 MacWinne_ i noticed some forum posts about checking tcp segment retransmits.. i seem to have those, but I'm not sure what a normal count looks like
01:49 MacWinne_ everything is connected via gigE private network.. 0.2 ms ping times.  4 nodes.  2 replica sets
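
A minimal sketch of the checks being discussed above, assuming a Linux client and a volume named data (the volume name is hypothetical); the retransmit counter comes from the client's kernel, and gluster volume profile is run on one of the servers:

    # count TCP segment retransmits on the client; compare the counter over time, not as an absolute
    netstat -s | grep -i retrans

    # repeat the timing from the log while watching glusterfsd CPU on the bricks
    time ls /mnt/gluster | wc -l

    # on a gluster server: per-brick latency and FOP statistics for the volume
    gluster volume profile data start
    gluster volume profile data info
    gluster volume profile data stop
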
01:57 vpshastry joined #gluster
02:00 XpineX_ joined #gluster
02:18 XpineX_ joined #gluster
02:33 MacWinne_ my logs under /var/log/glusterfs/bricks look weird.. rather than new entries going into brick-data.log, they are going into brick-data.log-20140525
02:33 MacWinne_ brick-data.log is empty..  seems new entries are going into the rolled over file
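
This usually means logrotate renamed the brick log without telling glusterfsd to reopen it, so the daemon keeps writing to the old (renamed) inode. A sketch of two common fixes, assuming default log paths and a volume named data (hypothetical); the exact rotate syntax varies between gluster releases:

    # ask gluster to reopen and rotate its own logs
    gluster volume log rotate data

    # or make logrotate copy-and-truncate instead of renaming, e.g. in /etc/logrotate.d/glusterfs:
    #   /var/log/glusterfs/bricks/*.log {
    #       daily
    #       rotate 7
    #       copytruncate
    #   }
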
02:41 bharata-rao joined #gluster
02:47 haomaiwang joined #gluster
02:54 RameshN joined #gluster
02:55 haomaiwa_ joined #gluster
03:00 glusterbot New news from newglusterbugs: [Bug 1102989] [libgfapi] glfs_open doesn't works for O_CREAT flag <https://bugzilla.redhat.com/show_bug.cgi?id=1102989>
03:09 gdubreui joined #gluster
03:11 rjoseph joined #gluster
03:12 haomai___ joined #gluster
03:18 gildub joined #gluster
03:20 vpshastry joined #gluster
03:26 aravindavk joined #gluster
03:27 haomaiwang joined #gluster
03:27 sputnik13 joined #gluster
03:28 ppai joined #gluster
03:30 kshlm joined #gluster
03:31 primechuck joined #gluster
03:39 vpshastry left #gluster
03:46 lalatenduM joined #gluster
03:51 sputnik13 joined #gluster
03:51 dusmant joined #gluster
03:52 sputnik13 joined #gluster
03:54 shubhendu_ joined #gluster
03:59 kanagaraj joined #gluster
04:03 DV__ joined #gluster
04:19 sage__ joined #gluster
04:22 psharma joined #gluster
04:23 jrcresawn joined #gluster
04:43 dusmantkp_ joined #gluster
04:50 nishanth joined #gluster
04:54 vpshastry joined #gluster
04:54 vpshastr1 joined #gluster
04:54 vpshastry left #gluster
04:54 spandit joined #gluster
04:57 kdhananjay joined #gluster
05:00 ctria joined #gluster
05:04 Mystica joined #gluster
05:04 Mystica left #gluster
05:13 davinder6 joined #gluster
05:13 dusmantkp__ joined #gluster
05:15 haomaiwang joined #gluster
05:35 sputnik13 joined #gluster
05:35 bala joined #gluster
05:39 dusmant joined #gluster
05:40 kumar joined #gluster
05:50 hagarth joined #gluster
05:51 primechuck joined #gluster
05:52 meghanam joined #gluster
05:52 meghanam_ joined #gluster
05:54 davinder6 joined #gluster
05:54 rjoseph joined #gluster
05:57 raghu joined #gluster
06:00 dusmant joined #gluster
06:02 kanagaraj joined #gluster
06:13 Philambdo joined #gluster
06:18 ramteid joined #gluster
06:18 vimal joined #gluster
06:20 kanagaraj joined #gluster
06:24 kanagaraj_ joined #gluster
06:28 ricky-ti1 joined #gluster
06:32 DV__ joined #gluster
06:38 rjoseph joined #gluster
06:38 hagarth joined #gluster
06:46 kanagaraj joined #gluster
07:01 harish joined #gluster
07:03 ctria joined #gluster
07:05 kdhananjay joined #gluster
07:05 rgustafs joined #gluster
07:07 karnan joined #gluster
07:11 eseyman joined #gluster
07:14 XpineX joined #gluster
07:23 kanagaraj joined #gluster
07:30 kdhananjay1 joined #gluster
07:31 glusterbot New news from newglusterbugs: [Bug 1100204] brick failure detection does not work for ext4 filesystems <https://bugzilla.redhat.com/show_bug.cgi?id=1100204>
07:33 fsimonce joined #gluster
07:38 kanagaraj_ joined #gluster
07:41 kanagaraj_ joined #gluster
07:42 saurabh joined #gluster
07:43 nshaikh joined #gluster
07:46 haomaiwa_ joined #gluster
07:49 haomaiw__ joined #gluster
07:56 nage joined #gluster
07:57 neoice joined #gluster
08:12 ngoswami joined #gluster
08:14 primusinterpares joined #gluster
08:33 andreask joined #gluster
08:34 kanagaraj joined #gluster
08:35 ProT-0-TypE joined #gluster
08:50 karimb joined #gluster
08:52 ktosiek joined #gluster
08:53 haomaiwa_ joined #gluster
08:55 vimal joined #gluster
09:00 qdk_ joined #gluster
09:04 sm1ly joined #gluster
09:05 vikhyat joined #gluster
09:06 sm1ly hello all. ppl. tell me plz. if I mount mount.glusterfs nfs1.abboom.world:/testvol -o direct-io-mode=disable /mnt/gluster, what happens if nfs1 goes down? will the client still work? I got nfs1 and nfs2
09:13 sm1ly hello all. ppl. tell me plz. if I mount mount.glusterfs nfs1.abboom.world:/testvol -o direct-io-mode=disable /mnt/gluster, what happens if nfs1 goes down? will the client still work? I got nfs1 and nfs2
09:15 kanagaraj joined #gluster
09:17 Pupeno joined #gluster
09:19 haomaiwa_ joined #gluster
09:20 sm1ly anyone?
09:23 vikhyat sm1ly: Hi, what is your volume type? is it distributed, pure replicate or distribute-replicate
09:25 sm1ly vikhyat, I'm doing gluster volume create abboom replica 2 transport tcp osd200.abboom.world:/data osd201.abboom.world:/data
09:28 nshaikh joined #gluster
09:31 sm1ly vikhyat, so it does just clone to 2 big srvs
09:32 sm1ly vikhyat, I mean when I mount it on client, must I use 2 something like nfs1:/data,nfs2:/data
09:33 kanagaraj joined #gluster
09:34 vikhyat sm1ly: not like that
09:35 vikhyat sm1ly: you have to specify volume name
09:35 vikhyat sm1ly: on client
09:35 sm1ly vikhyat, but like what? it's just synced. if nfs1 (which is the mount point) goes down, will nfs2 take the clients? yes, my mistake. nfs1:abboom
09:36 sm1ly oooh nfs1:/abboom
09:36 sm1ly I understand that. my question is: will nfs2 take the load, or do I need to use 2 mount points?
09:36 vikhyat sm1ly: yup if you are using mount.glusterfs
09:37 sm1ly yes mount.glusterfs from glusterfs-client
09:37 vikhyat sm1ly: yup
09:37 sm1ly vikhyat, thx a lot
09:38 vikhyat sm1ly: and if you will use mount.nfs , you should use ctdb ip failover
09:38 sm1ly vikhyat, no, no fuse
09:38 vikhyat sm1ly: great then no issues
09:38 sm1ly thx
09:38 vikhyat sm1ly: welcome
09:38 sm1ly vikhyat, have a nice day)
09:39 vikhyat sm1ly: you too :)
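
A sketch of what vikhyat is describing: with the native mount.glusterfs client, the server named at mount time is only used to fetch the volume layout, after which the client talks to all replicas directly, so the surviving node keeps serving if the mount server dies. A backup volfile server can also be named for the initial mount (hostnames are taken from the question above; the exact option name differs slightly between releases):

    mount -t glusterfs -o backupvolfile-server=nfs2.abboom.world,direct-io-mode=disable \
        nfs1.abboom.world:/testvol /mnt/gluster
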
09:39 sm1ly vikhyat, I tried cephfs, but it died when I recursively tried chmod on ~5000 little file
09:39 sm1ly files*
09:42 vikhyat sm1ly: okay
09:45 sm1ly vikhyat, I got gluster-epel.repo on centos 6.5. and http://pastebin.com/WvkvHrDR there is no glusterfs-client now. with 3.5 if I install glusterfs, I haven't got mount.glusterfs. what am I doing wrong??
09:45 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
09:46 sm1ly http://fpaste.org/105840/40144316/
09:46 glusterbot Title: #105840 Fedora Project Pastebin (at fpaste.org)
09:47 ndevos sm1ly: the package is called glusterfs-fuse
09:47 sm1ly ndevos, its not fuse I think
09:47 sm1ly fuse are for nfs
09:47 sm1ly ok. I see
09:47 sm1ly thx
09:47 ndevos sm1ly: :)
09:48 kanagaraj joined #gluster
09:48 sm1ly ndevos, yum provides */mount.glusterfs told me)))
09:48 sm1ly rpm -qa | grep glust
09:48 sm1ly ooooh sorry
09:54 sm1ly ndevos, vikhyat and last one question. if i mount like this: mount.glusterfs osd200.abboom.world:/abboom -o direct-io-mode=disable,noatime /usr/share/nginx/html is this correct for fstab: osd200.abboom.world:/abboom    /usr/share/nginx/html    glusterfs    defaults,_netdev,direct-io-mode=disable,noatime    0    0 ???
09:56 ndevos sm1ly: yeah, that looks good, but the 'defaults' is not really needed
09:56 sm1ly ndevos, i must change it or just rm ?
09:57 ndevos sm1ly: you can just remove it, if you use other options, the 'defaults' placeholder isn't needed anymore
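
For reference, the resulting fstab line with 'defaults' dropped, as discussed:

    osd200.abboom.world:/abboom    /usr/share/nginx/html    glusterfs    _netdev,direct-io-mode=disable,noatime    0    0
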
09:57 sm1ly thx. just in some panic after ceph brockes
09:57 sm1ly brokes*
10:01 sm1ly ndevos, any suggestions for tuning fs for small files? I use this options on vol: http://fpaste.org/105841/01444064/
10:01 glusterbot Title: #105841 Fedora Project Pastebin (at fpaste.org)
10:02 glusterbot New news from newglusterbugs: [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=1075611>
10:07 _Bryan_ joined #gluster
10:10 bene2 joined #gluster
10:10 ndevos sm1ly: my suggestion is to test your workload before and after you change an option, performance tuning tends to be different for each use-case and environment
10:11 ndevos sm1ly: gluster tries to use sane and most optimal values, only change options if you need to
10:12 kanagaraj joined #gluster
10:12 sm1ly ndevos, a lot of small files (1 to 6 mb )
10:13 ndevos sm1ly: yes, and 1 MB might be small for you, but not for someone else... and 'a lot' differs too, it is not really possible to suggest any fixed values, I think
10:14 sm1ly ndevos, 1-2millions growing to 5-6 in year or about
10:15 primechuck joined #gluster
10:18 ndevos sm1ly: I really can't say much about it, access patterns differ per use-case...
10:20 sm1ly ndevos, thx for ur cooperation
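
A crude sketch of the before/after testing ndevos recommends, assuming the files live under /mnt/gluster/test (a hypothetical path); measure, change exactly one option, then measure again:

    # write and re-read a batch of small files, timing each pass
    time bash -c 'for i in $(seq 1 1000); do dd if=/dev/zero of=/mnt/gluster/test/f$i bs=1M count=2 2>/dev/null; done'
    time cat /mnt/gluster/test/f* > /dev/null

    # change one volume option at a time (the option and value here are purely illustrative):
    #   gluster volume set abboom performance.cache-size 256MB
    # then rerun both timings and compare
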
10:29 kanagaraj joined #gluster
10:33 haomaiwang joined #gluster
10:35 haomai___ joined #gluster
10:36 aravindavk joined #gluster
10:42 fsimonce joined #gluster
10:43 tryggvil joined #gluster
10:53 ira joined #gluster
10:54 kanagaraj_ joined #gluster
10:55 kanagaraj_ joined #gluster
11:02 glusterbot New news from newglusterbugs: [Bug 1084508] read-ahead not working if open-behind is turned on <https://bugzilla.redhat.com/show_bug.cgi?id=1084508> || [Bug 1057292] option rpc-auth-allow-insecure should default to "on" <https://bugzilla.redhat.com/show_bug.cgi?id=1057292>
11:13 aravindavk joined #gluster
11:18 jwww joined #gluster
11:29 sjm joined #gluster
11:32 RameshN joined #gluster
11:32 glusterbot New news from newglusterbugs: [Bug 1024465] Dist-geo-rep: Crawling + processing for 14 million pre-existing files take very long time <https://bugzilla.redhat.com/show_bug.cgi?id=1024465>
11:32 RameshN joined #gluster
11:38 edward1 joined #gluster
11:38 kanagaraj joined #gluster
11:41 kanagaraj joined #gluster
11:42 P0w3r3d joined #gluster
11:48 xymox joined #gluster
11:53 bala1 joined #gluster
11:54 xymox joined #gluster
11:58 kanagaraj joined #gluster
12:05 diegows joined #gluster
12:13 sjm left #gluster
12:13 sjm joined #gluster
12:20 hagarth joined #gluster
12:28 ricky-ti1 joined #gluster
12:31 lpabon joined #gluster
12:46 haomaiwa_ joined #gluster
13:00 chirino joined #gluster
13:01 japuzzo joined #gluster
13:04 XpineX_ joined #gluster
13:10 sroy_ joined #gluster
13:10 jcsp_ joined #gluster
13:17 bennyturns joined #gluster
13:17 marcoceppi joined #gluster
13:17 marcoceppi joined #gluster
13:19 bala1 joined #gluster
13:19 rwheeler joined #gluster
13:24 bala2 joined #gluster
13:24 Pupeno_ joined #gluster
13:27 bnh2 joined #gluster
13:27 bnh2 hi
13:27 glusterbot bnh2: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:27 plki joined #gluster
13:27 plki anyone alive?
13:28 plki what would be best practices on an 80TB gluster system? how many servers?
13:29 haomaiwang joined #gluster
13:30 bnh2 Is there a way i can get a report on glusterFS issues on my machine, i.e. "I'm having trouble with the glusterfs process running at 50% and i would like to know why"?? this is happening on the glusterFS client machine
13:31 haomai___ joined #gluster
13:31 mjsmith2 joined #gluster
13:36 sjm joined #gluster
13:37 tdasilva joined #gluster
13:38 ndevos plki: mostly the storage is not the issue, but the number of clients and such is more important to know - more servers makes it possible to have a bigger throughput given sufficient clients
13:41 plki like an example configuration with 50 nfs clients in distributed replication mode
13:41 plki is there a best practices guide?
13:43 ndevos maybe http://rhsummit.files.wordpress.com/2013/07/england_th_0450_rhs_perf_practices-4_neependra.pdf ?
13:44 bnh2 Is there a way i can get a report on glusterFS issues on my machine, i.e. "I'm having trouble with the glusterfs process running at 50% and i would like to know why"?? this is happening on the glusterFS client machine
13:46 plki hrmm
13:47 ndevos plki: you probably will want to give all your servers a virtual-ip, and use a single rrdns hostname for mounting, that should distribute the nfs-traffic a little
13:48 ndevos if one of the servers goes down, the virtual-ip should be taken over by an other server, so that any client will still be able to connect to the IP
13:48 plki ndevos, yeah but to reach say ~100TB what would be best practices for hardware?
13:48 plki couple servers?
13:48 plki bricks per server?
13:50 ndevos plki: general advice is 12 disks per RAID-group, RAID-6 or RAID-10, and see what disks you want to use
13:52 ndevos plki: depending on the size of the disks, you'll create more RAID-groups and format each one of them as a brick - maybe use LVM to split a group in smaller pieces if you prefer working with smaller filesystems
13:53 plki ndevos, ok and how many bricks per server and is there a ram requirement?
13:54 ndevos plki: it depends on the use-case, https://access.redhat.com/site/articles/66206 contains some examples
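
A sketch of how that advice maps onto commands, assuming four servers each exposing one RAID-6 group formatted as XFS, combined into a distributed-replicated (replica 2) volume; every hostname, device and path below is illustrative:

    # on each server: format the RAID group as a brick filesystem and mount it
    mkfs.xfs -i size=512 /dev/md0
    mkdir -p /bricks/brick1
    mount /dev/md0 /bricks/brick1

    # on one server: replica pairs are formed in the order the bricks are listed
    gluster volume create bigvol replica 2 \
        server1:/bricks/brick1/data server2:/bricks/brick1/data \
        server3:/bricks/brick1/data server4:/bricks/brick1/data
    gluster volume start bigvol
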
13:56 rgustafs joined #gluster
13:57 samkottler joined #gluster
13:58 daMaestro joined #gluster
14:00 shubhendu joined #gluster
14:02 harish joined #gluster
14:02 plki ndevos, what are my mount options nfs and fuse?
14:03 haomaiwang joined #gluster
14:04 mjsmith2 joined #gluster
14:04 ndevos plki: yes, nfs and fuse, or you can setup samba so that clients can mount over cifs
14:04 ndevos elluminate--
14:05 * ndevos notices he'd put that in the wrong channel
14:05 andreask joined #gluster
14:12 wushudoin joined #gluster
14:15 bnh2 the glusterFS process runs at 50%; is there a way to find out why it's doing this?
14:18 karimb joined #gluster
14:20 recidive joined #gluster
14:20 plki so if I do two nodes with 2 bricks per server in replicated distributed I would get 2 bricks worth of space?
14:21 haomai___ joined #gluster
14:24 foobar plki: yup
14:26 foobar plki: http://paste.sigio.nl/pbspcatnt ... shows a 2 node, 3 disks per node, gluster... this has the capacity of 3 disks
14:26 glusterbot Title: Sticky Notes (at paste.sigio.nl)
14:26 foobar erm... 3 node, 2 disks per node, capacity of 2 nodes... i mean
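
To make foobar's point concrete: in a replica-2 distributed volume, usable capacity is half the raw brick space. A sketch with illustrative names, two servers with two 10TB bricks each:

    # 2 servers x 2 bricks of 10TB, replica 2  =>  roughly 20TB usable
    gluster volume create demo replica 2 \
        node1:/bricks/b1/data node2:/bricks/b1/data \
        node1:/bricks/b2/data node2:/bricks/b2/data
    # after mounting on a client, df reports the size of 2 bricks, not 4
    df -h /mnt/demo
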
14:29 davinder6 joined #gluster
14:40 ndk joined #gluster
14:40 primechuck joined #gluster
14:42 sm1ly joined #gluster
14:55 lmickh joined #gluster
15:00 kanagaraj joined #gluster
15:04 sputnik13 joined #gluster
15:10 theron joined #gluster
15:10 haomaiwa_ joined #gluster
15:14 haomai___ joined #gluster
15:15 theron joined #gluster
15:22 gmcwhistler joined #gluster
15:24 jag3773 joined #gluster
15:33 glusterbot New news from newglusterbugs: [Bug 928656] nfs process crashed after rebalance during unlock of files. <https://bugzilla.redhat.com/show_bug.cgi?id=928656>
15:34 rotbeard joined #gluster
15:35 cvdyoung joined #gluster
15:40 _jmp_ joined #gluster
15:41 karimb left #gluster
15:54 lachlan_munro joined #gluster
16:03 jcsp joined #gluster
16:07 haomaiwang joined #gluster
16:10 jcsp joined #gluster
16:16 vpshastry joined #gluster
16:16 vpshastry left #gluster
16:17 haomai___ joined #gluster
16:17 calum_ joined #gluster
16:21 vpshastry1 joined #gluster
16:23 B21956 joined #gluster
16:29 semiosis_ anyone have an opinion on this bug? https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1297218
16:29 glusterbot Title: Bug #1297218 “guest hangs after live migration due to tsc jump” : Bugs : “libvirt” package : Ubuntu (at bugs.launchpad.net)
16:30 jbd1 joined #gluster
16:32 mdavidson joined #gluster
16:36 mdavidson Is there any way of renaming a gluster volume?
16:38 semiosis_ you'll need downtime.  maybe replacing the name in the volfiles on all the servers, or recreating the volume with the same bricks
16:39 mdavidson I can cope with some downtime - just don't want to copy all the files somewhere else
16:40 semiosis_ well you could try stopping all the gluster processes on all servers and doing a search/replace for the volume name in /var/lib/glusterd
16:40 semiosis_ no idea if that will work though, i've never tried
16:43 liammcdermott joined #gluster
16:45 mdavidson I'll give it a go on a test setup ;-)
16:45 semiosis_ great, let us know how it goes
16:46 mdavidson looks like i'll need to rename a lot of files too
16:46 semiosis_ yep
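
A sketch of the untested rename procedure semiosis outlines, assuming the volume is currently called oldvol and should become newvol (both names hypothetical), with 3.4-era state under /var/lib/glusterd:

    # on every server, with all gluster processes stopped (bash syntax)
    service glusterfs-server stop
    grep -rl oldvol /var/lib/glusterd | xargs sed -i 's/oldvol/newvol/g'
    # the directory and several files under vols/ carry the volume name as well
    mv /var/lib/glusterd/vols/oldvol /var/lib/glusterd/vols/newvol
    for f in /var/lib/glusterd/vols/newvol/*oldvol*; do mv "$f" "${f/oldvol/newvol}"; done
    service glusterfs-server start
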
16:48 theron_ joined #gluster
16:49 liammcdermott Hello, am finding these instructions don't work anymore (in v3.4.2) http://joejulian.name/blog/glusterfs-path-or​-a-prefix-of-it-is-already-part-of-a-volume/
16:49 glusterbot Title: GlusterFS: {path} or a prefix of it is already part of a volume (at joejulian.name)
16:50 liammcdermott Am attempting to delete and create a volume again, but with a different name.
16:52 liammcdermott Does anyone know if the instructions for working around this bug have changed?
16:53 semiosis_ liammcdermott: heh, i just advised mdavidson to go a different route to avoid that issue
16:53 semiosis_ liammcdermott: afaik those instructions are correct
16:53 semiosis_ haven't done it in a while myself though
16:54 liammcdermott Someone else in the comments also mentioned it doesn't work.
16:54 semiosis_ strange how so many people have so much trouble with that
16:55 liammcdermott Maybe the priority of the bug to fix it should be raised then. :)
16:55 semiosis_ and yet many others get along fine
16:55 liammcdermott I did actually delete the volume, so gluster is daft to think the directory is still part of one.
16:55 semiosis_ liammcdermott: did you remove the xattrs from the brick dir and all parent dirs above it?
16:56 liammcdermott Yep, I'll give it another try.
16:56 liammcdermott I'm assuming I need to do this on every node too?
16:56 _dist joined #gluster
16:56 semiosis_ every brick
16:57 liammcdermott Yes, that's the right term.
17:05 liammcdermott semiosis_, went through the instructions again and it worked this time. Must've missed something on one of the bricks the first time around.
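
For reference, a sketch of the cleanup from the linked article, run against every brick directory of the deleted volume (the path /data/brick is hypothetical); this is what clears the "already part of a volume" state:

    setfattr -x trusted.glusterfs.volume-id /data/brick
    setfattr -x trusted.gfid /data/brick
    rm -rf /data/brick/.glusterfs
    # the article also clears these xattrs on parent directories if they were ever set there
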
17:05 semiosis_ \o/
17:05 liammcdermott So it does still work on 3.4.2. Still an incredibly rage-inducing bug, but oh well. :)
17:06 semiosis_ not sure that's really a bug.  i'm comforted by the fact that deleting a volume config doesn't touch the bricks at all.
17:06 semiosis_ but meh
17:07 _dist I can confirm (as I have done it many times) that deleting a volume is more a metadata thing, it doesn't wipe the actual data
17:08 liammcdermott I'm not expecting it to wipe data, just let me re-create the volume with a different name
17:08 _dist liammcdermott: I didn't catch the first part of the conversation, but in my experience I don't think you can "re-use" bricks if that's what you mean
17:10 liammcdermott _dist, so really I should've just used a different directory?
17:10 liammcdermott Probably would've been easier, since I didn't have any data in the volume yet.
17:10 semiosis_ you *can* reuse bricks, you just have to fix up the xattrs first
17:11 liammcdermott Well, I would expect a volume delete to do that for me, since the brick isn't in use anymore. That's why it's a bug IMO.
17:11 _dist semiosis_: yeap most people don't want to do that though, it'd be nicer if deleting a volume removed them, and if adding a volume to an existing set of data added them too
17:12 semiosis_ most people dont delete volumes :)
17:12 liammcdermott So?
17:13 liammcdermott Anyway, got it done. Thanks very much! :)
17:13 semiosis_ you're welcome to voice your opinion on the bug tracker, but i'm pretty sure this is NOTABUG
17:14 semiosis_ it's a feature added to protect against data corruption due to accidentally adding an existing brick to a volume, which can be bad
17:14 liammcdermott It's: https://bugzilla.redhat.com/show_bug.cgi?id=812214 I believe
17:14 glusterbot Bug 812214: medium, high, 3.3.0beta, kparthas, CLOSED CURRENTRELEASE, [b337b755325f75a6fcf65616eaf4467b70b8b245]: add-brick should not be allowed for a directory which already has a volume-id
17:14 semiosis_ so it's a slight inconvenience for something most people rarely, if ever, do
17:15 liammcdermott And yet lots of people have this issue and the brick isn't part of a volume.
17:15 _dist I think it's a matter of opinion, deleting a volume does work without issue. Really the problem is in the add brick part where it doesn't like old xattr stuff and doesn't tag existing data. But, hard to say it's a bug, it's just a feature that isn't there yet
17:15 liammcdermott Hang on sorry, that bug's probably not what I thought it was
17:16 _dist Best practice is to just start fresh always.
17:16 semiosis_ right
17:16 liammcdermott _dist, yes I'll do that next time.
17:17 _dist I think I'm going to have to start fresh soon myself, as I just found out zfs stores xattr data as actual files by default, I'm certain it's slowing things down and suspect it _may_ be related to my vm healing issue.
17:18 semiosis_ _dist: seen this? http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
17:18 glusterbot Title: GlusterOnZFS - GlusterDocumentation (at www.gluster.org)
17:18 semiosis_ article's a year old already but it might be useful
17:19 _dist I followed it, but it doesn't explicitly say to use xattr=sa in the steps. It does however mention that if you do, be on the current version because there was a symlink bug
17:20 bennyturns joined #gluster
17:21 _dist I agree with almost all of it, it's a good place to start from. Might be helpful to add something to explain further the xattr setting.
17:22 semiosis_ feel free to comment on the talk side of the article, or edit the text directly if you wish
17:22 semiosis_ public wiki :)
17:22 _dist oh, I didn't notice.
17:23 Matthaeus joined #gluster
17:23 _dist I'm going to wait until I run my own tests to verify, my current gluster is on zfs with xattr=on not sa. Based on this https://github.com/zfsonlinux/zfs/issues/443 <-- though I think it makes good sense to use xattr sa
17:23 glusterbot Title: Implement SA based xattrs · Issue #443 · zfsonlinux/zfs · GitHub (at github.com)
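
A sketch of the setting _dist is weighing, assuming a ZFS dataset named tank/gluster (hypothetical); note the change only affects xattrs written afterwards, existing files keep their directory-based xattrs:

    zfs get xattr tank/gluster     # 'on' stores xattrs as hidden files, 'sa' stores them in the dnode
    zfs set xattr=sa tank/gluster
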
17:24 [o__o] joined #gluster
17:27 MacWinne_ joined #gluster
17:33 glusterbot New news from newglusterbugs: [Bug 1010747] cp of large file from local disk to nfs mount fails with "Unknown error 527" <https://bugzilla.redhat.com/show_bug.cgi?id=1010747>
17:35 davinder6 joined #gluster
17:40 mdavidson I renamed the files and changed the references in the files to the new volume name, but must be missing something, when I start glusterd then a vols/gv0 (the old name) is created and populated
17:48 Mo__ joined #gluster
17:55 rwheeler joined #gluster
18:00 Matthaeus joined #gluster
18:01 zaitcev joined #gluster
18:11 sputnik13 joined #gluster
18:16 ProT-0-TypE joined #gluster
18:17 Matthaeus joined #gluster
18:22 bet_ joined #gluster
18:48 [o__o] joined #gluster
19:03 Matthaeus joined #gluster
19:03 ekuric joined #gluster
19:08 sputnik13 joined #gluster
19:12 sputnik13 joined #gluster
19:13 sputnik13 joined #gluster
19:16 _Bryan_ joined #gluster
19:25 ira joined #gluster
19:37 davinder6 joined #gluster
19:51 plki left #gluster
20:02 ktosiek joined #gluster
20:04 glusterbot New news from newglusterbugs: [Bug 1103347] Crash in uuid_unpack under heavy load <https://bugzilla.redhat.com/show_bug.cgi?id=1103347>
20:09 Matthaeus joined #gluster
20:16 d-fence joined #gluster
20:17 hflai_ joined #gluster
20:18 _pol joined #gluster
20:18 \malex\_ joined #gluster
20:19 \malex\ joined #gluster
20:20 ProT-0-TypE joined #gluster
20:20 XpineX_ joined #gluster
20:20 social__ joined #gluster
20:20 cfeller_ joined #gluster
20:21 theron joined #gluster
20:22 tdasilva left #gluster
20:24 Matthaeus joined #gluster
20:27 marmalodak https://bugzilla.redhat.com/show_bug.cgi?id=1102460
20:27 glusterbot Bug 1102460: unspecified, unspecified, ---, kparthas, NEW , 0-rpc-service: Could not register with portmap
20:27 marmalodak only one person on the CC list
20:27 marmalodak does that mean only one person has been notified of it?
20:47 Matthaeus joined #gluster
21:10 san joined #gluster
21:10 san ubuntu 14.04 auto mount client
21:10 Guest93993 Can anyone help me fix automount client on ubuntu 14.04 ?
21:11 Guest93993 [fuse-bridge.c:5444:fini] 0-fuse: Unmounting
21:11 semiosis_ Guest93993: what version of glusterfs?
21:13 Guest93993 @semiosis can you please help ?
21:14 JoeJulian Guest93993: Not if you don't answer his questions.
21:14 Guest93993 @JoeJulian Sorry, I dont see any questions
21:15 semiosis_ what version of glusterfs?
21:15 _pol joined #gluster
21:16 Guest93993 3.5
21:16 semiosis_ did you install it from my ,,(ppa) ?
21:16 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 stable: http://goo.gl/cVPqEH -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
21:16 Guest93993 Yes from your ppa
21:16 semiosis_ please put the client log file on pastie.org
21:18 Guest93993 http://pastie.org/9240651
21:18 glusterbot Title: #9240651 - Pastie (at pastie.org)
21:19 Guest93993 looks like, the mounting is occurring before IPoIB is up while booting
21:19 semiosis_ a ha!
21:23 Guest93993 @Semiosis_ can you help ?
21:25 Guest93993 notice /me testing
21:25 ktosiek how can I fix gfid mismatch? The files are the same (they have the same checksum, at least) and I don't care which file's metadata will "win"
21:26 ktosiek (it's on gluster 3.2.something)
21:26 semiosis_ Guest93993: i'm very busy right now and don't know anything about IPoIB.  i would need to do some research (maybe another time), or get my hands on a test setup (not going to happen). sorry
21:27 ktosiek 3.2.6
21:27 Guest93993 thanks for informing
21:27 JoeJulian ktosiek: Just delete the "bad" one.
21:27 JoeJulian ... from the brick
21:27 semiosis_ ,,(gfid mismatch)
21:27 glusterbot http://community.gluster.org/a/alert-glusterfs-release-for-gfid-mismatch/
21:27 ktosiek the whole file?
21:27 semiosis_ upgrade
21:28 JoeJulian ktosiek: yes (copy it somewhere if you're scared)
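
A sketch of JoeJulian's suggestion for this 3.2 setup, assuming the mismatched file is dir/file on a brick mounted at /data/brick (paths are hypothetical); on 3.3 and later there is also a matching gfid hard link under the brick's .glusterfs directory that would need removing:

    # on the brick holding the copy you want to discard
    cp /data/brick/dir/file /root/file.backup    # optional, if you're scared
    rm /data/brick/dir/file
    # then stat the file through a client mount so self-heal recreates it from the good brick
    stat /mnt/gluster/dir/file
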
21:28 JoeJulian And yes, upgrade!
21:28 JoeJulian @forget gfid mismatch
21:28 glusterbot JoeJulian: The operation succeeded.
21:28 ktosiek cool, it worked
21:28 ktosiek JoeJulian: thanks
21:28 JoeJulian You're welcome.
21:28 ktosiek I'll upgrade this thing to 3.5 ASAP
21:29 JoeJulian OR at least 3.4
21:29 ktosiek for now I have to keep this thing (more or less) running
21:29 ktosiek semiosis_: that gfid URL gave me 404
21:29 semiosis_ bummer
21:29 semiosis that was weird
21:30 semiosis how long have I been semiosis_?!
21:31 ktosiek BTW can I use glusterfs mount as a brick?
21:32 ktosiek that would really help me with upgrades (as I would have a zero-downtime upgrade path)
21:33 JoeJulian semiosis: Looks like https://botbot.me/freenode/gluster/msg/15429848/
21:33 glusterbot Title: Logs for #gluster | BotBot.me [o__o] (at botbot.me)
21:34 semiosis JoeJulian: hah. it was a rhetorical question.  but thanks!
21:34 JoeJulian hehe
21:36 Matthaeus joined #gluster
21:40 _dist JoeJulian: did you see my comments earlier about a suspicion that the zfs xattr default storage method may be related to the heal issue? I haven't tested it yet, but I was wondering if the person who verified it was also on zfs
21:51 sputnik13 joined #gluster
21:52 Guest93993 @semiosis : Do I need to create a separate mounting-glusterfs-<mountpoint>.conf file similar to mounting-gluster.conf file ?
21:53 semiosis Guest93993: try adding a post-start script stanza with sleep 30 in it
21:53 semiosis or some variation on that
21:53 semiosis a real correct solution depends on how the IPoIB stuff is started during boot, which I dont know
21:53 semiosis so a hack would be to just delay the mount with a sleep
21:53 semiosis maybe
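
A sketch of the delay hack being suggested, written as an addition to the Ubuntu upstart job that gates glusterfs mounts (the override file name is an assumption, and per the discussion further down this did not fully solve the ordering problem here):

    # /etc/init/mounting-glusterfs.override  -- hypothetical override file
    post-start script
        # crude workaround: give IPoIB time to come up before the glusterfs mount proceeds
        sleep 30
    end script
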
22:02 _pol joined #gluster
22:03 JoeJulian _dist: No, they were using xfs
22:04 _dist JoeJulian: oh well, at least the xattr change will give me better speed. let me know when you start looking into it I'll eagerly perform testing :)
22:08 JustinClift Guest93993: Can Ubuntu's startup stuff define dependant relationships between services?
22:08 JustinClift Guest93993: eg adjust the Gluster config to depend on working IB/IPoIB?
22:10 Guest93993 the old upstart script used to work with gluster 3.4 : http://pastie.org/private/vylx55sx2pcr7kfsovibna
22:10 glusterbot Title: Private Paste - Pastie (at pastie.org)
22:10 Guest93993 http://pastie.org/9240760
22:10 glusterbot Title: #9240760 - Pastie (at pastie.org)
22:12 semiosis Guest93993: maybe something changed with how IPoIB works
22:12 JustinClift Guest93993: With the "IFACE=ib0" bit, is ib0 the IB or IPoIB interface?
22:12 JustinClift Note - it's been months since I touched IB.  Super rusty now. :/
22:13 Guest93993 it is ipoib defined in /etc/network/interfaces
22:13 JustinClift Hmmm, I have no idea then. :(
22:13 JustinClift Guest93993: Ask on the mailing lists?
22:13 Guest93993 will try
22:14 Guest93993 pre, post scripts don't work with sleep 30 seconds
22:14 Guest93993 the order is simply messed up as per dmesg
22:18 semiosis JustinClift: re: Can Ubuntu's startup stuff define dependant relationships between services?
22:18 semiosis yes, through the magic of Upstart
22:18 JustinClift Cool
22:18 semiosis figuring out exactly how to specify that relationship is the hard part
22:19 JustinClift ;)
22:19 semiosis if only it were as easy as "start on working IB/IPoIB"
22:19 semiosis but i suspect it's not
22:19 semiosis the mount blocker holds glusterfs mounts until the 'static-network-up' event fires
22:20 semiosis i wonder... a) is IPoIB even upstartified?  b) does static-network-up wait for IPoIB (seems to not)  and c) should it?
22:25 semiosis is there any way for me to test this with a virtual IPoIB device?
22:25 Guest93993 a. IPoIB is not upstartified - magic of /etc/network/interfaces take care of it b. static-network-up does not wait for IPoIB
22:25 semiosis Guest93993: ok
22:28 Guest93993 I am not certain about vbox or vmware
22:29 Guest93993 http://oraclemiddlewareblog.com/2012/02/13/how-to-simulate-exalogic-for-training-purposes/ may help
22:31 MugginsO joined #gluster
22:38 chirino joined #gluster
22:46 meridion joined #gluster
22:46 Wizzup joined #gluster
22:50 meridion Does GlusterFS by any chance support using bricks as a caching back-end? So you could for instance use a local HDD as a caching brick for parts of the remote filesystem
22:50 meridion also.. the glusterfs channel seems kinda crowded, I wasn't allowed to join
22:51 fidevo joined #gluster
23:03 Matthaeus joined #gluster
23:06 edward1 joined #gluster
23:17 JoeJulian meridion: That's because it's not the official channel, this is.
23:17 meridion alright
23:17 JoeJulian And the answer to your first question is, no.
23:17 meridion That's a pity
23:18 meridion although I could always try my hands at writing a new translator for it
23:18 JoeJulian That would probably make you some friends.
23:22 Matthaeus joined #gluster
23:27 jcsp joined #gluster
23:27 semiosis there's a #glusterfs channel?  and I wasn't invited???
23:28 JoeJulian semiosis: glusterbot's the only one in there, and the channel has a population limit of 1.
23:28 semiosis hahaha nice
23:44 plarsen joined #gluster
