
IRC log for #gluster, 2014-07-29


All times shown according to UTC.

Time Nick Message
00:01 chirino joined #gluster
00:14 Peter2 joined #gluster
00:38 julim joined #gluster
00:40 julim joined #gluster
01:15 lyang0 joined #gluster
01:22 bala joined #gluster
01:30 ndk joined #gluster
01:45 xoritor JoeJulian, are you around?
01:48 haomaiwa_ joined #gluster
01:50 haomai___ joined #gluster
01:57 Peter2 joined #gluster
02:01 harish_ joined #gluster
02:04 haomaiwang joined #gluster
02:05 Peter3 joined #gluster
02:07 Peter2 joined #gluster
02:09 haomaiw__ joined #gluster
02:51 JoeJulian I'm here for a couple minutes, what's up?
02:52 JoeJulian And you're not... I should really try expanding nicks before I bother...
02:54 haomaiwang joined #gluster
02:57 JoeJulian http://goo.gl/2sZa09
02:58 * JoeJulian pokes glusterbot
02:58 JoeJulian https://plus.google.com/u/0/113457525690313682370/posts/Sh5okpNn7Co
03:00 JoeJulian Ah, nice one Google...
03:00 JoeJulian @web title http://goo.gl/2sZa09
03:00 glusterbot JoeJulian: That URL appears to have no HTML title within the first 4KB.
03:01 JoeJulian Identifying choke points with real-world data: enterprise consumers on OpenStack Nova volumes on GlusterFS
03:02 haomaiw__ joined #gluster
03:06 haomaiwa_ joined #gluster
03:23 haomai___ joined #gluster
03:28 haomaiwa_ joined #gluster
03:29 Humble joined #gluster
03:44 haomaiw__ joined #gluster
03:45 sputnik13 joined #gluster
04:06 haomaiwa_ joined #gluster
04:10 Humble joined #gluster
04:25 kumar joined #gluster
04:29 cjhanks joined #gluster
04:37 MacWinner joined #gluster
04:41 brettnem_ joined #gluster
04:43 jobewan joined #gluster
05:04 tdondich joined #gluster
05:04 an joined #gluster
05:36 ricky-ti1 joined #gluster
06:26 rpatil joined #gluster
06:37 lalatenduM joined #gluster
06:43 harish_ joined #gluster
06:50 clyons|2 joined #gluster
06:54 ctria joined #gluster
07:04 shylesh__ joined #gluster
07:20 fsimonce joined #gluster
07:23 keytab joined #gluster
07:53 tdondich joined #gluster
07:53 haomaiwa_ joined #gluster
07:55 haomai___ joined #gluster
08:04 mhoungbo joined #gluster
08:04 sputnik13 joined #gluster
08:05 sputnik13 joined #gluster
08:19 harish_ joined #gluster
08:19 ricky-ticky joined #gluster
08:40 Slashman joined #gluster
08:41 the-me joined #gluster
08:45 liquidat joined #gluster
09:04 harish_ joined #gluster
09:11 vimal joined #gluster
09:24 sputnik13 joined #gluster
09:31 shubhendu joined #gluster
09:31 Humble joined #gluster
10:39 delhage joined #gluster
10:55 kkeithley joined #gluster
11:08 ndk joined #gluster
11:10 edward1 joined #gluster
11:30 LebedevRI joined #gluster
11:36 ira joined #gluster
11:37 Frank77 joined #gluster
11:39 baoboa joined #gluster
11:40 Nightlydev joined #gluster
11:42 Frank77 Hello. I would like advice about cache mechanisms. I host a VM whose hard drive is stored on a Gluster volume. There is an RDBMS running in this VM. I would like to know if it's possible to make sure that any write cache is disabled on this volume? I found information about the O_SYNC option but it doesn't sound clear to me. I'm using KVM with the libgfapi implementation. The VM HDD is already configured
11:42 Frank77 in writethrough mode. But I don't know if it's enough or how to test it.
11:51 bfoster joined #gluster
11:56 kkeithley_ left #gluster
12:04 bennyturns joined #gluster
12:05 kkeithley joined #gluster
12:06 calum_ joined #gluster
12:11 diegows joined #gluster
12:14 chirino joined #gluster
12:21 chirino joined #gluster
12:23 torbjorn__ Frank77: there's some documentation on the wiki with regards to that use-case: http://www.gluster.org/community/documentation/index.php/Virt-store-usecase
12:23 glusterbot Title: Virt-store-usecase - GlusterDocumentation (at www.gluster.org)
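[editor's note] A minimal sketch of how the guest write cache Frank77 asks about can be pinned down on the QEMU/libvirt side, assuming a hypothetical volume "vmstore" on host "gfs1" holding an image "db.img". With cache=writethrough or cache=directsync, QEMU opens the image with O_DSYNC, so every guest write is flushed through libgfapi before being acknowledged; cache=none alone uses O_DIRECT but still presents a volatile write cache to the guest.
    # hypothetical example: start a guest over libgfapi with no volatile write cache
    qemu-system-x86_64 \
        -drive file=gluster://gfs1/vmstore/db.img,format=raw,if=virtio,cache=directsync
    # the libvirt equivalent is the cache attribute on the disk's <driver> element:
    #   <driver name='qemu' type='raw' cache='directsync'/>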
12:31 gEEbusT Hi guys, i'm seeing some errors when using gluster as a fuse mount for samba:
12:31 gEEbusT [2014-07-29 07:23:24.402893] W [client-rpc-fops.c:1232:client3_3_removexattr_cbk] 1-VOL-client-3: remote operation failed: No data available
12:31 gEEbusT [2014-07-29 07:23:24.403934] W [client-rpc-fops.c:1232:client3_3_removexattr_cbk] 1-VOL-client-2: remote operation failed: No data available
12:31 gEEbusT [2014-07-29 07:23:24.405142] W [fuse-bridge.c:1172:fuse_err_cbk] 0-glusterfs-fuse: 2694185: REMOVEXATTR() /VOL/example/file => -1 (No data available)
12:31 gEEbusT any ideas?
12:37 bene2 joined #gluster
12:42 Pupeno joined #gluster
12:49 xleo joined #gluster
12:50 kkeithley left #gluster
12:51 glusterbot New news from newglusterbugs: [Bug 1065632] glusterd: glusterd peer status failed with the connection failed error evenif glusterd is running <https://bugzilla.redhat.com/show_bug.cgi?id=1065632> || [Bug 1113460] peer probing fails on glusterfs-3.5.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1113460>
12:56 DV joined #gluster
12:56 kkeithley joined #gluster
13:00 ramteid joined #gluster
13:05 ccha3 joined #gluster
13:21 glusterbot New news from newglusterbugs: [Bug 1113460] after enabling quota, peer probing fails on glusterfs-3.5.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1113460>
13:38 bennyturns joined #gluster
13:42 bennyturns joined #gluster
13:43 ekuric joined #gluster
14:03 mhoungbo_ joined #gluster
14:05 mhoungbo_ joined #gluster
14:08 calum_ joined #gluster
14:09 mortuar joined #gluster
14:10 kkeithley left #gluster
14:10 kkeithley1 joined #gluster
14:15 plarsen joined #gluster
14:17 shubhendu joined #gluster
14:17 wushudoin joined #gluster
14:18 ctria joined #gluster
14:18 churnd joined #gluster
14:19 Rydekull joined #gluster
14:21 plarsen joined #gluster
14:29 ndk joined #gluster
14:32 nueces joined #gluster
14:33 richvdh joined #gluster
14:35 tdasilva joined #gluster
14:36 shubhendu joined #gluster
14:39 xleo joined #gluster
14:47 mhoungbo joined #gluster
14:48 ws2k3 joined #gluster
14:49 Humble joined #gluster
15:13 dtrainor_ joined #gluster
15:14 jobewan joined #gluster
15:15 lmickh joined #gluster
15:21 glusterbot New news from newglusterbugs: [Bug 1122443] Symlink mtime changes when rebalancing <https://bugzilla.redhat.com/show_bug.cgi?id=1122443>
15:22 mortuar joined #gluster
15:22 MacWinner joined #gluster
15:40 xoritor joined #gluster
15:41 xoritor bd-xlator is so cool!!!
15:42 Peter2 joined #gluster
15:42 xoritor now if you can only use it with libgfapi
15:43 tdondich I want to throw out random acronyms too.
15:43 xoritor https://github.com/gluster/glusterfs/blob/master/doc/features/bd-xlator.md
15:43 glusterbot Title: glusterfs/doc/features/bd-xlator.md at master · gluster/glusterfs · GitHub (at github.com)
15:44 xoritor libgfapi should not be random here
15:44 xoritor https://github.com/gluster/glusterfs/blob/master/doc/features/libgfapi.md
15:44 glusterbot Title: glusterfs/doc/features/libgfapi.md at master · gluster/glusterfs · GitHub (at github.com)
15:44 xoritor but just in case
15:45 xoritor anyone here that is working on the bd-xlator?
15:45 xoritor if so freaking good job!
15:45 xoritor wow
15:45 xoritor man this is AWESOME
15:46 xoritor bad ass does not begin to describe it
15:58 calum_ joined #gluster
15:58 daMaestro joined #gluster
16:06 mojibake joined #gluster
16:07 Humble joined #gluster
16:15 MacWinner joined #gluster
16:18 Humble joined #gluster
16:19 kumar joined #gluster
16:35 doo joined #gluster
16:47 Peter2 my brick crashed when I tried to run du against a particular directory
16:53 sputnik13 joined #gluster
17:06 Peter2 when I ran disk_usage_sync.sh from the extras directory on a GFS client, it crashed a couple of bricks on a replica 2 volume ....
17:11 Peter2 looking into the disk_usage_sync.sh, it's running du -n
17:11 Peter2 du -bc
17:11 Peter2 so it seems like running du on a replica 2 volume caused the brick to go down....
17:11 Peter2 and it still keeps going down at this point
17:11 Peter2 i have to keep restarting it
17:11 XpineX joined #gluster
17:13 oxidane joined #gluster
17:18 bennyturns joined #gluster
17:29 XpineX_ joined #gluster
17:32 nueces joined #gluster
17:34 MacWinner joined #gluster
17:40 kumar joined #gluster
18:00 nueces joined #gluster
18:10 ira joined #gluster
18:13 chirino joined #gluster
18:34 Peter2 uploaded logs to the bug
18:34 Peter2 https://bugzilla.redhat.com/show_bug.cgi?id=1122732
18:34 glusterbot Bug 1122732: high, unspecified, ---, vshastry, NEW , remove volume hang glustefs
18:35 Humble joined #gluster
18:37 sspinner joined #gluster
18:38 Pupeno joined #gluster
18:41 Pupeno joined #gluster
18:47 recidive joined #gluster
18:55 Pupeno joined #gluster
18:56 Pupeno joined #gluster
18:57 oxidane joined #gluster
19:00 rotbeard joined #gluster
19:20 _Bryan_ joined #gluster
19:26 chirino joined #gluster
20:12 ricky-ti1 joined #gluster
20:14 chucky_z joined #gluster
20:15 chucky_z hello, excuse my bad question, but can you theoretically do an fdisk/mkfs/partition on an already existing partition to mount as a gluster brick?
20:19 xleo joined #gluster
20:22 xoritor chucky_z,  not really, no
20:23 xoritor chucky_z, you would have to have lvm setup to do something similar to that but it would not be the same thing you described
20:24 DanishMan joined #gluster
20:24 chucky_z xoritor: hm, there's already something setup like that so I'm just really curious...  I'm trying to determine how it was done correctly
20:25 gildub joined #gluster
20:25 chucky_z mount the same partition twice?
20:25 xoritor you create a logical volume then make a filesystem on it that is used to "back" the glusterfs
20:26 xoritor then the volume is mounted as glusterfs so that the data gets sent to ALL of the bricks
20:26 xoritor you have to ::  mount -t glusterfs host:/volumename /mountpoint
20:26 xoritor that tells glusterfs to send or get the data from all bricks
20:27 chucky_z ah yes it's mounted as glusterfs in the fstab
20:27 xoritor depends on how its setup
20:27 xoritor one will be a "real" filesystem  and one will be glusterfs
20:27 chucky_z alright, then I >can< take an existing filesystem and simply mount glusterfs on top of it?
20:28 xoritor well... not exactly like that
20:28 xoritor you take a filesystem and make it a brick in a volume
20:28 xoritor a volume can have one or more bricks
20:28 xoritor on one or more servers
20:29 xoritor in one of several different ways of distribution
20:29 chucky_z yes that's how it's currently setup, 2 bricks
20:29 xoritor it is best to start with a CLEAN FS and make bricks
20:29 xoritor then mount the glusterfs volume
20:30 xoritor then copy the data in
20:30 xoritor and let gluster write it to the new places
20:30 xoritor anything else will probably not distribute your data
20:31 xoritor ie... the links won't exist in the .glusterfs dir with the right hashes etc...
20:32 chucky_z when you refer to 'clean fs' do you mean a disk/raid with absolutely nothing on it?
20:32 xoritor just a filesystem
20:32 xoritor i use xfs
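[editor's note] A minimal sketch of the "clean FS" brick preparation xoritor describes, assuming a hypothetical empty device /dev/sdb1 and brick path /export/brick1. The GlusterFS documentation of this era recommended XFS with 512-byte inodes so the extended attributes gluster sets fit inside the inode.
    # hypothetical example: format and mount a dedicated brick filesystem
    mkfs.xfs -i size=512 /dev/sdb1     # fresh XFS, 512B inodes for xattrs
    mkdir -p /export/brick1
    mount /dev/sdb1 /export/brick1     # the "real" filesystem that will back the brick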
20:32 chucky_z ok
20:32 chucky_z i apologize my filesystem knowledge is... pretty lacking
20:32 xoritor its all good
20:32 xoritor gluster can confuse lots of people that know lots about filesystems
20:33 xoritor ;-)
20:33 chucky_z gluster has been setup previously in several locations and i've tasked myself with loading it on a new cluster
20:34 xoritor that is a great way to learn it
20:34 xoritor plan on breaking it a few times
20:34 xoritor :-D
20:35 chucky_z no can do, have to get it right on the first go
20:35 xoritor https://github.com/gluster/glusterfs/tree/master/doc
20:35 glusterbot Title: glusterfs/doc at master · gluster/glusterfs · GitHub (at github.com)
20:35 chucky_z ok, i was looking at this page: http://www.gluster.org/community/documentation/index.php/Getting_started_configure
20:35 glusterbot Title: Getting started configure - GlusterDocumentation (at www.gluster.org)
20:35 xoritor oh man... well one thing that is certain is that it RARELY is exactly right on the first go
20:36 xoritor thats ok... good place to go
20:36 chucky_z exactly right doesn't matter, just 'unbroken' is what i'm looking for
20:36 xoritor what version are you using?
20:36 chucky_z so essentially I already have /dev/sda1 mounted as /; ext4
20:36 chucky_z nothing at the moment
20:36 chucky_z i'm researching first
20:36 xoritor what distro?
20:36 chucky_z centos
20:36 chucky_z (6)
20:36 xoritor .what?
20:37 chucky_z centos 6.5...
20:37 chucky_z EL6 to be generic i guess?
20:38 xoritor ah so i think that would be 3.3 or 3.2 out of the box....
20:39 xoritor but if you use the epel repos it is at 3.5.1
20:39 xoritor http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/epel-6/x86_64/
20:39 glusterbot Title: Index of /pub/gluster/glusterfs/LATEST/RHEL/epel-6/x86_64 (at download.gluster.org)
20:39 chucky_z already have EPEL installed. :)
20:39 xoritor well the gluster repos for epel stuff
20:40 xoritor the repo file and pub key are here http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/
20:40 glusterbot Title: Index of /pub/gluster/glusterfs/LATEST/RHEL (at download.gluster.org)
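[editor's note] A minimal sketch of installing 3.5.x from that repo on CentOS 6; the repo file name glusterfs-epel.repo is an assumption based on the listing above and may differ.
    # hypothetical example: enable the gluster community repo and install the server
    cd /etc/yum.repos.d/
    wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo
    yum install glusterfs-server
    service glusterd start && chkconfig glusterd on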
20:40 xoritor 3.5 is nice
20:41 chucky_z my EPEL version has 3.4... hm
20:41 xoritor 3.4 is ok
20:41 xoritor 3.5 is nice though
20:42 chucky_z if it runs i'll just install the RPM's
20:42 xoritor are you going to be doing quotas?
20:43 chucky_z no
20:44 xoritor yea then upgrade is pretty easy
20:45 stickyboy And 3.5.2 beta is out...
20:45 chucky_z so anyway, about making the actual brick
20:46 chucky_z since i already have the filesystem, can i just tell gluster 'ok make this folder the brick' and mount it inside an existing filesystem?
20:46 chucky_z e.g. mount '/gluster' as glusterfs inside of /
20:46 xoritor stickyboy, ooooh does it improve the bd-xlator?
20:46 systemonkey joined #gluster
20:47 chucky_z oh i think i got that backwards
20:47 xoritor you do a gluster volume create VOLNAME host1:/dir host2:/dir host3:/dir
20:47 xoritor where /dir is where the "real" fs is mounted
20:47 chucky_z ok yes, i want some goofy directory where nobody is going to ever visit. :)
20:47 xoritor you first have to probe all the peers
20:47 chucky_z alright
20:47 xoritor gluster peer probe
20:48 xoritor yea, you can use it to check things.... kinda like a read only thing
20:48 xoritor like a window into whats happening
20:49 xoritor an open window you can reach into and take things and put things... but you should not
20:49 xoritor restrain yourself
20:49 xoritor self control!
20:49 xoritor if it is on lvm you can snapshot it and back it up just like normal
20:50 chucky_z lol
20:50 chucky_z i understand that part, we have the brick mounted as /var/www on some servers... this is what i'm looking at as a 'don't do this!!'
20:51 chucky_z our developers are constantly going in and mucking about causing brainsplits
20:51 chucky_z or split brain, not sure of the actual term
20:52 chucky_z so my example is... ok i do `gluster volume create VOLNAME host1:/gluster/dont/touch/me/no/really host2:/gluster/dont/touch/me/no/really`, then I mount that to '/var/www/' so that I can do `touch some_file` inside of /var/www/ and have it appear on host1 and host2
20:53 xoritor split brain
20:53 xoritor yep
20:53 xoritor touch /var/www/index.html will show up in both  host1:/gluster/dont/touch/me/no/really host2:/gluster/dont/touch/me/no/really
20:54 xoritor if you did it all correctly
20:54 xoritor ;-)
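[editor's note] A minimal sketch of the full sequence discussed above, using hypothetical names host1/host2, a brick path /gluster/brick1 on each node, and a volume called www:
    # hypothetical example: two-node replica 2 volume, consumed through a fuse mount
    gluster peer probe host2                         # run once from host1
    gluster volume create www replica 2 \
        host1:/gluster/brick1 host2:/gluster/brick1
    gluster volume start www
    mount -t glusterfs host1:/www /var/www           # clients write here, never into the bricks
    touch /var/www/index.html                        # shows up on both bricks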
20:54 chucky_z haha OK.
20:54 chucky_z and there should be no problem with mounting in other places right?
20:54 chucky_z im going to try this out with... say '/test/'
20:54 chucky_z so i mount host1:/gluster to /test on host1, and identical on host2
20:54 xoritor no
20:55 chucky_z i shouldnt have any issues unmounting and then remounting to /var/www later?
20:55 xoritor you can even mount it as nfs if you want to
20:55 chucky_z i already deal enough with nfs.... no thanks for now
20:55 xoritor no need to setup nfs as gluster does it out of the box
20:55 chucky_z cool
20:56 xoritor just mount host1:/volume
20:56 xoritor it mounts as  nfs
20:56 xoritor mount -t glusterfs host1:/volume it mounts as glusterfs
20:56 xoritor you have to actually tell it to mount as glusterfs
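[editor's note] A minimal sketch of the two client mount flavours being contrasted, with the same hypothetical host1/www names; gluster's built-in NFS server in this era spoke NFSv3 over TCP, so the version and transport have to be forced on the client.
    # hypothetical example: native client vs gluster's built-in NFS
    mount -t glusterfs host1:/www /mnt/native
    mount -t nfs -o vers=3,tcp host1:/www /mnt/nfs
    # fstab entry for the native client:
    # host1:/www  /mnt/native  glusterfs  defaults,_netdev  0 0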
20:57 chucky_z alright
20:57 chucky_z gluster i've found is much easier to diagnose... it's very clear when there's an issue heh
20:57 xoritor when you create the volume you will give it the name
20:57 xoritor usually
20:57 stickyboy 1/exit
20:57 xoritor until it is not
20:57 stickyboy #fail.
20:57 xoritor ;-)
20:57 xoritor when it is not it really is NOT
20:58 xoritor stickyboy, you are making me sticky!
20:59 chucky_z alright cool this should be a decently not-impossible setup that i can test without bringing anything down then
21:00 chucky_z thank you very much for your help xoritor
21:00 xoritor good luck
21:00 xoritor i didnt do much
21:00 xoritor hope it goes well for you
21:06 chucky_z you did a good job explaining things. :)
21:14 Pupeno joined #gluster
21:17 Pupeno joined #gluster
21:29 Pupeno joined #gluster
21:43 ndk joined #gluster
22:00 stickyboy joined #gluster
22:26 DV joined #gluster
23:08 ira joined #gluster
23:31 plarsen joined #gluster
23:46 _Bryan_ joined #gluster
23:46 bene2 joined #gluster
23:55 edong23 joined #gluster
