IRC log for #gluster, 2014-04-29

All times shown according to UTC.

Time Nick Message
00:16 jmarley joined #gluster
00:16 jmarley joined #gluster
00:35 yinyin joined #gluster
00:39 LessSeen joined #gluster
00:42 LessSeen_ joined #gluster
00:52 Honghui_ joined #gluster
00:56 LessSeen joined #gluster
00:57 XpineX_ joined #gluster
01:01 Durzo im getting the dreaded "volume create failed <xxx> is already part of a brick" but it applies to an LVM VolumeGroup, how the hell do i remove the attributes from a VG?
01:04 gdubreui joined #gluster
01:12 JoseBravo joined #gluster
01:14 Durzo bleh, ended up removing the VG and recreating it
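
For context on the "already part of a brick" error above: with ordinary filesystem bricks, the usual workaround is to clear the GlusterFS xattrs and the .glusterfs directory from the old brick path before reusing it. A minimal sketch with a hypothetical brick path; Durzo's case involved a BD-backed LVM volume group, where recreating the VG (as done above) is the simpler way out:

    # clear leftover GlusterFS metadata from a filesystem brick (path is an example)
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs
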
01:21 johnny5_ joined #gluster
01:22 johnny5_ Good evening folks. I've got an odd one. 6 server distributed replicated in 2s. A bunch of stuff in heal and heal failed but even after a heal full the list doesn't change. Running 3.4. I'm not even sure where to start.
01:22 Honghui joined #gluster
01:23 bala joined #gluster
01:24 vpshastry joined #gluster
01:27 glusterbot New news from newglusterbugs: [Bug 1089414] Need support for handle based Ops to fetch/modify extended attributes of a file <https://bugzilla.redhat.com/show_bug.cgi?id=1089414>
02:02 dbruhn joined #gluster
02:04 harish joined #gluster
02:05 Honghui joined #gluster
02:09 LessSeen joined #gluster
02:11 Humble joined #gluster
02:19 gdubreui joined #gluster
02:43 hagarth joined #gluster
02:45 Honghui joined #gluster
02:55 saurabh joined #gluster
02:57 rastar joined #gluster
02:58 Honghui__ joined #gluster
03:07 yinyin_ joined #gluster
03:09 haomaiwa_ joined #gluster
03:18 kanagaraj joined #gluster
03:46 RobertLaptop joined #gluster
03:49 Honghui joined #gluster
03:55 hchiramm_ joined #gluster
04:00 Durzo Anyone know why im getting "All subvolumes are down" when i try to qemu-img create through gluster/libgfapi? full log here: http://paste.ubuntu.com/7356922/
04:00 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
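
For reference, creating an image directly on a Gluster volume with a libgfapi-enabled qemu-img uses the gluster:// URI form; a minimal sketch with hypothetical host, volume, and image names:

    # qemu-img via the gluster block driver (names are examples only)
    qemu-img create -f qcow2 gluster://gluster-host/volname/vm1.qcow2 20G
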
04:01 bala joined #gluster
04:04 kumar joined #gluster
04:04 mjsmith2 joined #gluster
04:06 itisravi joined #gluster
04:10 haomaiwa_ joined #gluster
04:15 Honghui joined #gluster
04:17 yinyin_ joined #gluster
04:17 ppai joined #gluster
04:18 shubhendu joined #gluster
04:21 dusmant joined #gluster
04:22 haomaiw__ joined #gluster
04:22 bharata-rao joined #gluster
04:31 ngoswami joined #gluster
04:38 atinmu joined #gluster
04:39 theron joined #gluster
04:43 ndarshan joined #gluster
04:48 ravindran1 joined #gluster
04:51 hagarth joined #gluster
04:52 prasanthp joined #gluster
04:52 Honghui joined #gluster
05:01 deepakcs joined #gluster
05:01 benjamin_____ joined #gluster
05:10 Durzo bharata-rao, could you ping me when you are around.. i need some BD xlator help please :/
05:11 rjoseph joined #gluster
05:11 bharata-rao Durzo, I am around, but I will have to remember though, but anyway ask
05:12 Durzo bharata-rao, i have followed your blog and set up a 2 replica set LVM BD and am trying to use it with qemu. when i run qemu-img i get an error "All subvolumes are down" - a file is created inside the meta LV but there is no LV created for it
05:12 theron joined #gluster
05:13 bharata-rao Durzo, I guess you have set xattr after creating the file ?
05:13 Durzo i understand your blog advises it is a "2 step process". im guessing qemu is not setting the correct xattr in order to turn it into an LV - is this not possible?
05:13 Durzo if this is something i have to manually do then its probably not the solution im looking for
05:13 meghanam joined #gluster
05:14 meghanam_ joined #gluster
05:14 aravindavk joined #gluster
05:14 Durzo we use virt-manager ontop of libvirt to create volumes and having to manually turn the file into an LV isnt going to work for us
05:14 bharata-rao Durzo, qemu doesn't yet understand bd volume, you will have to manually create the image and then use it as qemu image
05:14 Durzo ok thanks
05:16 ppai joined #gluster
05:17 bharata-rao Durzo, There was work planned for exporting the volume capabilities via libgfapi so that clients like QEMU can know more about the type of xlator (like BD) and take special actions (like setxattr), but I don't think Mohan (who wrote BD) got around to doing that
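
A rough sketch of the two-step BD workflow bharata-rao describes: the file is created through the normal path first, and an xattr then maps it to an LV. The xattr key and value below are recalled from the BD xlator write-ups and should be treated as an assumption, not a confirmed interface:

    # step 1: create the image file on the BD-enabled volume (names hypothetical)
    qemu-img create gluster://host/bd-vol/vm1.img 10G
    # step 2: mark it as block-backed via a fuse mount of the volume
    # (assumed xattr from the BD xlator blog posts; verify against the docs)
    setfattr -n user.glusterfs.bd -v lv /mnt/bd-vol/vm1.img
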
05:18 ravindran1 joined #gluster
05:20 theron joined #gluster
05:26 nshaikh joined #gluster
05:27 davinder2 joined #gluster
05:27 kdhananjay joined #gluster
05:36 ppai joined #gluster
05:36 nishanth joined #gluster
05:37 nthomas joined #gluster
05:39 theron joined #gluster
05:39 DV joined #gluster
05:48 surabhi joined #gluster
05:51 dusmant joined #gluster
05:53 pk joined #gluster
05:55 vpshastry joined #gluster
05:58 glusterbot New news from newglusterbugs: [Bug 921215] Cannot create volumes with a . in the name <https://bugzilla.redhat.com/show_bug.cgi?id=921215>
06:00 vpshastry joined #gluster
06:11 raghu joined #gluster
06:25 lalatenduM joined #gluster
06:28 glusterbot New news from newglusterbugs: [Bug 1021998] nfs mount via symbolic link does not work <https://bugzilla.redhat.com/show_bug.cgi?id=1021998>
06:34 davinder3 joined #gluster
06:34 rahulcs joined #gluster
06:35 psharma joined #gluster
06:36 haomaiwa_ joined #gluster
06:37 purpleidea JoeJulian: burning the midnight hack :)
06:38 pk purpleidea: ping
06:38 glusterbot pk: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
06:38 purpleidea pk: hey!
06:38 pk purpleidea: what is the time now?
06:38 purpleidea 2:38am :)
06:39 glusterbot New news from resolvedglusterbugs: [Bug 1087177] Gluster module (purpleidea) fails on mkfs exec command <https://bugzilla.redhat.com/show_bug.cgi?id=1087177>
06:39 purpleidea pk: are you sleeping regularly again?
06:39 pk purpleidea: nope :-)
06:39 ppai joined #gluster
06:39 pk purpleidea: All the others have already re-adjusted... I think I will also be in some time
06:40 pk purpleidea: What are you working on at this moment?
06:40 purpleidea pk: porting puppet-gluster to be multi-distro compatible
06:40 purpleidea pk i have a photo to send you. pm me the email you want it at
06:49 dusmant joined #gluster
06:52 Pavid7 joined #gluster
06:53 Honghui joined #gluster
06:58 jiffe98 joined #gluster
07:02 haomai___ joined #gluster
07:05 edward1 joined #gluster
07:07 ctria joined #gluster
07:09 ktosiek joined #gluster
07:15 eseyman joined #gluster
07:20 keytab joined #gluster
07:22 dusmant joined #gluster
07:23 ricky-ticky joined #gluster
07:28 rahulcs joined #gluster
07:31 MrAbaddon joined #gluster
07:33 fsimonce joined #gluster
07:35 purpleidea does anyone know if the common gluster names such as folders, deb package names, and so on, are basically the same in ubuntu and debian ?
07:35 purpleidea semiosis: ^
07:36 Durzo they should be
07:52 DV__ joined #gluster
07:57 giannello joined #gluster
07:58 pithagora joined #gluster
07:59 pithagora hello all. im trying to remove a brick from a replicated pool. here is my volume info https://gist.github.com/anonymous/11393490  and here is what i get https://gist.github.com/anonymous/11393496
07:59 glusterbot Title: gist:11393490 (at gist.github.com)
08:00 liquidat joined #gluster
08:01 dewey_ joined #gluster
08:04 ngoswami joined #gluster
08:04 andreask joined #gluster
08:07 davent left #gluster
08:13 pithagora if i indicate the replica 1 option i get  https://gist.github.com/anonymous/11393739
08:13 glusterbot Title: gist:11393739 (at gist.github.com)
08:13 pithagora please help with the right command :)
08:14 Honghui joined #gluster
08:23 psharma joined #gluster
08:25 rahulcs joined #gluster
08:27 ravindran2 joined #gluster
08:27 hybrid512 joined #gluster
08:28 ndarshan joined #gluster
08:32 harish joined #gluster
08:33 hybrid512 joined #gluster
08:34 bharata_ joined #gluster
08:43 ron-slc joined #gluster
08:46 glafouille joined #gluster
08:49 hagarth joined #gluster
08:50 saravanakumar joined #gluster
08:51 hybrid512 joined #gluster
08:52 hybrid512 joined #gluster
08:53 sputnik13 joined #gluster
08:57 hybrid512 joined #gluster
09:03 vpshastry joined #gluster
09:03 ravindran1 joined #gluster
09:13 rahulcs joined #gluster
09:17 zerodeux joined #gluster
09:27 pithagora joined #gluster
09:29 d-fence joined #gluster
09:29 glusterbot New news from newglusterbugs: [Bug 1092414] Disable NFS by default <https://bugzilla.redhat.com/show_bug.cgi?id=1092414>
09:35 haomaiwang joined #gluster
09:40 ctria joined #gluster
09:49 rahulcs_ joined #gluster
09:50 jmarley joined #gluster
09:50 jmarley joined #gluster
09:56 rahulcs joined #gluster
09:56 dusmant joined #gluster
09:57 haomaiwa_ joined #gluster
09:58 ackjewt Hi, is it possible to disable the NFS server globally in gluster? E.g not per volume.
10:02 d-fence joined #gluster
10:03 ndevos ackjewt: no, not that I know of
10:04 ackjewt ndevos: Ok, thanks for the answer
10:05 edward1 joined #gluster
10:18 purpleidea ackjewt: puppet-gluster ,,(puppet) could do this if you're interested.
10:18 glusterbot ackjewt: https://github.com/purpleidea/puppet-gluster
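
Since there is no global switch, the usual approach is the per-volume option, repeated for every volume; a minimal sketch with a hypothetical volume name:

    # disable the built-in gluster NFS server on one volume
    gluster volume set myvol nfs.disable on
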
10:26 hagarth joined #gluster
10:29 glusterbot New news from newglusterbugs: [Bug 1092433] DHT + Snapshot :- If snapshot is taken when Directory is created only on hashed sub-vol; On restoring that snapshot Directory is not listed on mount point and lookup on parent is not healing <https://bugzilla.redhat.com/show_bug.cgi?id=1092433>
10:35 ira joined #gluster
10:38 ctria joined #gluster
10:39 pithagora joined #gluster
10:42 xiu joined #gluster
10:57 RameshN joined #gluster
11:00 hagarth joined #gluster
11:10 kkeithley1 joined #gluster
11:11 Alpinist joined #gluster
11:15 dusmant joined #gluster
11:18 Durzo anyone know why gluster keeps returning "All subvolumes are down. Going offline until atleast one of them comes back up." on file operations even though the file op succeeds ?
11:19 kumar joined #gluster
11:35 aravindavk joined #gluster
11:37 rahulcs joined #gluster
11:39 d-fence joined #gluster
11:39 DV joined #gluster
11:43 haomaiwa_ joined #gluster
11:45 mkzero joined #gluster
11:45 zorgan joined #gluster
11:49 zorgan Hello. Is there any way to find out real file location in glusterfs? I'm using glusterfs 3.4.2
11:54 Alpinist joined #gluster
11:54 gmcwhistler joined #gluster
11:56 hagarth joined #gluster
11:59 cvdyoung Hello.  I have 2 servers with a single distributed volume "home", and I am trying to set volume options like auth.allow.  I get an error "volume set: failed: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again"
12:00 glusterbot New news from newglusterbugs: [Bug 1092183] GlusterFS and Systemd - start/stop is broken <https://bugzilla.redhat.com/show_bug.cgi?id=1092183>
12:07 pk left #gluster
12:07 andreask joined #gluster
12:09 dusmant joined #gluster
12:12 MrAbaddon joined #gluster
12:16 jmarley joined #gluster
12:16 jmarley joined #gluster
12:18 lmickh joined #gluster
12:21 qdk joined #gluster
12:22 d-fence joined #gluster
12:28 cvdyoung Hello.  I have 2 servers with a single distributed volume "home", and I am trying to set volume options like auth.allow.  I get an error "volume set: failed: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again"
12:29 itisravi joined #gluster
12:30 williamj_ joined #gluster
12:30 williamj__ joined #gluster
12:31 williamj_ Hi all,  need some help please.
12:31 williamj_ Can one delete a gluster volume without losing the data
12:33 d-fence joined #gluster
12:34 MrAbaddon joined #gluster
12:41 saurabh joined #gluster
12:44 kkeithley_ deleting a volume does not touch the data on the bricks
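
In other words, only the volume definition goes away; the files stay on the brick directories. A minimal sketch (volume name hypothetical); note that re-adding the same brick paths to a new volume trips the "already part of a volume" check unless the brick xattrs are cleared first, as sketched earlier in this log:

    gluster volume stop myvol
    gluster volume delete myvol    # brick directories and their files are left untouched
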
12:45 sroy_ joined #gluster
12:49 Durzo anyone have any success with qemu and glusterfs via libgfapi? im having nothing but miserable failures in everything i try
12:50 samppah Durzo: what kind of failures you are experiencing?
12:50 Durzo samppah, qemu-img commands succeed but return "All subvolumes are down. Going offline until atleast one of them comes back up."
12:50 samppah i have used it successfully but i switched to fuse after experiencing couple of strange crashes
12:51 Durzo and i cannot start a guest with a glusterfs disk, virsh simply hangs
12:51 Durzo im using a fresh install of trusty 14.04 lts with qemu 2.0.0+glusterfs compiled in
12:52 Durzo glusterfs 3.5.0
12:52 Durzo the gluster brick is the on the same box as the kvm host
12:53 samppah ok, i'm not that familiar with ubuntu
12:53 samppah can you send output of gluster volume info to pastie.org?
12:54 coredump joined #gluster
12:54 ctria joined #gluster
12:55 Durzo sampahh: http://paste.ubuntu.com/7359233/
12:55 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
12:57 samppah Durzo: okay, anything in log files?
12:57 Durzo virsh hangs when i try to start a guest with a disk definition from glusterfs (libgfapi). nothing appears in gluster log files at this point and an strace is totally silent as well.. its like something is blocking it
12:57 Durzo samppah, it appears as though the file operation succeeds, i can see the qcow2 file via a fuse mount and it is replicated over to the second gluster brick just fine
12:58 Durzo absolutely 0 in the logs
12:58 Durzo other than the successful client mounts
12:59 Durzo about to give it a try over qemu+fuse
13:00 Durzo samppah, what version of gluster did you have (minimal) success with it?
13:02 samppah Durzo: gluster 3.4.2 and 3.4.3
13:02 samppah using centos 6.5
13:02 Durzo ok
13:02 Durzo could be worth me downgrading to 3.4.3 i guess
13:05 Durzo bloody thing works perfectly over a fuse mount
13:06 gdubreui joined #gluster
13:08 d-fence joined #gluster
13:10 JoseBravoHome joined #gluster
13:12 japuzzo joined #gluster
13:13 dusmant joined #gluster
13:13 pdrakeweb joined #gluster
13:16 Durzo well its official, gfapi support sucks nuts in qemu
13:20 rahulcs joined #gluster
13:20 MrAbaddon joined #gluster
13:21 lalatenduM Durzo, have you seen this http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt
13:21 glusterbot Title: Libgfapi with qemu libvirt - GlusterDocumentation (at www.gluster.org)
13:27 dbruhn joined #gluster
13:30 mjsmith2 joined #gluster
13:32 dewey joined #gluster
13:42 shubhendu joined #gluster
13:46 lpabon joined #gluster
13:49 Durzo lalatenduM, that was one of the guides i went off, yes
13:50 gmcwhistler joined #gluster
13:51 Durzo lalatenduM, as mentioned above qemu-image worked but threw an error. all virsh commands hung the process forever when referencing a gluster file
13:52 lalatenduM Durzo, hope your selinux and iptables are not blocking anything
13:53 Durzo lalatenduM, ubuntu 14.04 lts server. no iptables and no selinux
13:53 lalatenduM Durzo, which glusterfs version you are using?
13:53 Durzo 3.5.0
13:53 Durzo qemu 2.0.0, libvirt 1.2.2
13:54 lalatenduM Durzo, hmm, versions are alright
13:54 Durzo its working via fuse mount... but thats kinda not what i wanted
13:54 lalatenduM Durzo, I know some people in community use libgfapi + qemu,
13:54 Durzo well it failed horribly for me
13:55 lalatenduM Durzo, you done required tuning i.e. volume set commands
13:55 lalatenduM ?
13:55 Durzo i threw 2 days and several thousands of dollars into it.. i checked and double checked everything
13:55 lalatenduM s/you/have you/
13:55 glusterbot What lalatenduM meant to say was: Durzo, have you done required tuning i.e. volume set commands
13:55 Durzo yes
13:58 sas joined #gluster
13:58 rahulcs joined #gluster
13:59 mjsmith2 joined #gluster
14:01 bet_ joined #gluster
14:01 lalatenduM Durzo, I remember sas seeing similar issue, sas ^^
14:02 tdasilva joined #gluster
14:02 bennyturns joined #gluster
14:02 lalatenduM sas, you can read the logs at http://irclog.perlgeek.de/gluster-dev/
14:02 glusterbot Title: IRC logs - Index for #gluster-dev (at irclog.perlgeek.de)
14:02 lalatenduM sas, oops wrong link, here is the correct one http://irclog.perlgeek.de/gluster/2014-04-29
14:02 glusterbot Title: IRC log for #gluster, 2014-04-29 (at irclog.perlgeek.de)
14:03 lalatenduM Durzo, and you have restarted the volume after the volume set commands ? just wanted to make sure you have not missed anything
14:04 Durzo yes
14:04 Durzo i have restarted glusterd on both bricks
14:04 lalatenduM Durzo, u mean you have restarted the volume and glusterd right?
14:04 sas lalatenduM, reading
14:04 Durzo my /etc/glusterfs/glusterd.vol contains "option rpc-auth-allow-insecure on"
14:05 Durzo i have rebooted both gluster brick hosts.. so everything has gotten a good restart
14:05 ndevos Durzo: you also need to restart the glusterfsd processes for the volume, I think (maybe that has been fixed already?)
14:05 lalatenduM Durzo, hmm
14:05 ndevos well, a reboot should do it :)
14:05 Durzo the 3.5.0 ubuntu init scripts completely kill all gluster proccesses anyway
14:05 Durzo (quite scarily)
14:06 lalatenduM Durzo, yup,
14:07 DV joined #gluster
14:07 benjamin_____ joined #gluster
14:08 lalatenduM Durzo, currently "qemu-img create" is failing through libgfapi right?
14:08 Durzo no it works but throws an error. see http://paste.ubuntu.com/7359233/
14:08 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:09 Durzo what is flat out failing is trying to start a domain through virsh or virt-manager or even manually by running qemu-system-x86.. the process just hangs indefinitely and strace shows nothing.. its blocked by something
14:09 sas Durzo, yes, that is expected and fixed with this bug - https://bugzilla.redhat.com/show_bug.cgi?id=1054696
14:09 lalatenduM Durzo, got it, qemu-img create returning error through libgfapi , what are the glusterfs packages installed on the server?
14:09 glusterbot Bug 1054696: low, low, ---, rwheeler, CLOSED UPSTREAM, Got debug message in terminal while qemu-img creating qcow2 image
14:10 lalatenduM sas, do you know if the fix is in 3.5 branch?
14:10 sas lalatenduM, no this is not in 3.5
14:12 kanagaraj joined #gluster
14:12 lalatenduM sas, the errors in the bug are different than what Durzo is getting, isn't it?
14:12 Durzo lalatenduM, glusterfs-client, glusterfs-common and glusterfs-server all version 3.5.0-ubuntu1~trusty1 from semisos ppa
14:12 sas Durzo, I too faced such a hang, so I did few changes, 1. added ownership to the volume ( uid:gid = 107/107) 2. Added option to volume - allow-insecure on 3. Restarted volume 4. Added option, rpc-auth-allow-insecure on to glusterd vol file and restarted glusterd
14:13 sas Durzo, doing all the above, I could overcome the hang
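
For reference, the settings sas lists map roughly to the following commands; the volume name is hypothetical, and 107:107 is the qemu uid:gid on the distro in question:

    # sketch of the libgfapi/qemu tuning described above
    gluster volume set kvmvol storage.owner-uid 107
    gluster volume set kvmvol storage.owner-gid 107
    gluster volume set kvmvol server.allow-insecure on
    gluster volume stop kvmvol && gluster volume start kvmvol
    # plus, in /etc/glusterfs/glusterd.vol on every server, then restart glusterd:
    #   option rpc-auth-allow-insecure on
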
14:13 Durzo sas, i have all of those things set
14:13 Durzo im happy to give my servers another reboot
14:14 sas lalatenduM, Durzo, is there any pastebins or logs ?
14:14 lalatenduM ndevos, sas do we need glusterfs-api pkg installed too?
14:14 Durzo sas, several.. what in particular?
14:15 Durzo there is no glusterfs-api in ubuntu (from semiosis ppa anyway)
14:15 sas lalatenduM, yes, of-course, glusterfs-api is required
14:15 sas Durzo, is it ?
14:16 pithagora hey guys. im getting Incorrect brick when trying to remove a brick from volume. please help. what im doing wrong? https://gist.github.com/anonymous/11401663
14:16 glusterbot Title: gist:11401663 (at gist.github.com)
14:16 Durzo glusterfs-common contains /usr/lib/pkgconfig/glusterfs-api.pc
14:16 lalatenduM Durzo, plz point us to semiosis ppa
14:16 Durzo so probably bundled with that package?
14:16 lalatenduM Durzo, sas yeah I expect so
14:16 Durzo https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.5
14:16 glusterbot Title: ubuntu-glusterfs-3.5 : semiosis (at launchpad.net)
14:18 lalatenduM pithagora, Just try "gluster volume remove-brick file-storage storage1:/storage start"
14:18 sas Durzo, can you fuse mount the volume and change the ownership of the image to the relevant ones, 107:107 and retry it ?
14:19 pithagora lalatenduM: i get: Removing bricks from replicate configuration is not allowed without reducing replica count explicitly.
14:19 P0w3r3d joined #gluster
14:19 lalatenduM pithagora, yeah right, I missed that you have replica 2
14:20 pithagora lalatenduM: root@storage2 / #  gluster volume remove-brick file-storage replica 1 storage1:/storage start
14:20 pithagora Incorrect brick storage1:/storage for volume file-storage
14:20 Durzo sas, have already done that.. am just restarting gluster procs now
14:21 sas Durzo, ohh ok, http://paste.ubuntu.com/7359233/, showed that the ownership was root:root
14:21 Durzo that was some hours ago now
14:21 Durzo sorry
14:22 sas Durzo, oh ok !!
14:24 Durzo and frozen... dumping into paste now
14:25 lalatenduM pithagora, may be it is not allowed , are you u trying gluster 3.5?
14:26 sas lalatenduM, yes, min requisite for replica vol is 2 bricks
14:26 cvdyoung Hi.  I have setup a distributed volume called home, and then I expanded it by adding another brick from a second server.  When I run a job to create 10 50G files, I am seeing the traffic only going to a single server, even though I am using DNS RR.  Any ideas what I have set wrong?  Thank you!
14:26 sas lalatenduM, you can convert single brick-ed dist vol to replica vol with 2 bricks and not the reverse, as I know
14:26 lalatenduM pithagora, let me know what exactly ur objective is, we can find workarounds
14:26 pithagora lalatenduM: glusterfs 3.3.1 built on Oct 22 2012 07:54:22
14:27 sas Durzo, any clues ?
14:27 lalatenduM sas, ok, I thought it was possible in older glusterfs but not in the latest
14:27 pithagora lalatenduM: i have a 2 node cluster. i have to shutdown one server and leave only one. in my examples, as you can see, i want to take out storage1
14:28 lalatenduM pithagora, for that you dont have to do remove brick, just shutdown the server, when it will came up self-heal will occur
14:29 Durzo sas, lalatenduM http://paste.ubuntu.com/7359693/
14:29 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:29 pithagora lalatenduM: it doesn't have to come up. i have to close it and leave only one.
14:31 diegows joined #gluster
14:31 sas pithagora, but the volume would become distribute volume without fault-tolerance, is that ok for you ?
14:32 pithagora sas: i just have to keep the data available for a while. fault - tolerance is not required.
14:32 churnd oh that was weird... i updated gluster & it changed my mount point permissions
14:33 churnd well ownership actually
14:33 lalatenduM pithagora, you can create a new volume and migrate the data by just copying
14:33 sas pithagora, in that case, you can try the following : "gluster volume remove-brick replica 1 <vol-name> <brick1> force" could help
14:33 mjsmith2 joined #gluster
14:33 lalatenduM churnd, which version of glusterfs you updated?
14:33 sas pithagora, remember the volume would turn in to distribute
14:33 churnd previous version to the latest
14:33 churnd 3.4.1 to 3.5.0 iirc
14:34 lalatenduM churnd, looks like a issue, plz file a @bug
14:35 lalatenduM @bug
14:35 glusterbot lalatenduM: (bug <bug_id> [<bug_ids>]) -- Reports the details of the bugs with the listed ids to this channel. Accepts bug aliases as well as numeric ids. Your list can be separated by spaces, commas, and the word "and" if you want.
14:36 Durzo any ideas guys?
14:36 ndevos Durzo: I think libvirt creates log-files for each VM under /var/log/libvirt/qemy/
14:36 sas Durzo, looking at it only
14:36 Durzo ndevos, log is empty
14:36 ndevos wow
14:37 Durzo like i said it just hangs immediately, does not launch, does not attempt to do anything
14:37 rwheeler joined #gluster
14:38 pithagora lalatenduM: 3.6 TB of data, i don't think i want to copy from one volume to an other, at least because i have no space :)
14:38 lalatenduM Durzo, ndevos sas I hope the issue is not in QEMU 2.0
14:38 pithagora sas: root@storage2 / #  gluster volume remove-brick file-storage replica 1 storage1:/storage force
14:38 pithagora Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
14:38 pithagora Incorrect brick storage1:/storage for volume file-storage
14:39 pithagora the results i get with force
14:39 ndevos lalatenduM: I do not hope so too!
14:39 ndevos Durzo: do you see any attemps to access the bricks in the brick logs?
14:39 Durzo ndevos, clearing my gluster logs and trying again
14:39 lalatenduM pithagora, lets ask ndevos he might give some workaround
14:40 churnd where do i file a bug
14:40 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:40 lalatenduM Durzo, I dont see any irregularities , anyway I am not an expert with qemu and glusterfs ;(
14:41 zerodeux hello gluster hackers
14:41 zerodeux I have a question, something simple and maybe intuitive but not written in the doc about profiling
14:41 ndevos pithagora: is the brick process at storage1:/storage still running?
14:41 pithagora yes
14:41 zerodeux does the number showed by 'volume top' are those accumulated during the sampling period between a volume profile start and stop ?
14:42 zerodeux I meant 'numbers' of course
14:43 Durzo i do get this: W [graph.c:329:_log_if_unknown_option] 0-nfs-server: option 'rpc-auth-allow-insecure' is not recognized
14:44 ndevos pithagora: and storage1:/storage is the brick as shown in 'gluster volume info'?
14:44 sas pithagora, yes, the data would be retained in other brick, see to that there is no self-heal is happening before you take it out
14:45 sas pithagora, stop all I/O from mount, before doing this remove-brick operation
14:45 pithagora ndevos: yes, i see same bricks on both nodes
14:47 ndevos Durzo: that log is related to nfs, we dont care about it now, do you have something in the logs of glusterd or in the logs of the bricks?
14:47 sas pithagora, when you convert, the replicate volume ( with 2 bricks ) to distribute volume containing single brick, the data is retained in another brick which becomes dist volume
14:48 sas pithagora, does that help ?
14:48 Durzo nothing of interest, i can paste them if youd like
14:48 pithagora sas: i've stopped all io from mount on storage1 server, stopped glusterfs server and killed all other glusterfs operations and i get the same
14:48 failshell joined #gluster
14:48 coredump joined #gluster
14:49 Durzo ndevos, http://paste.ubuntu.com/7359812/
14:49 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:51 sas pithagora, remove-brick cli is more closely associated with distributed and distributed-replicate volume, in those cases removing the brick, means losing the data in that brick
14:52 sas pithagora, but that was not the case with replicate volume, as it has same copies of data on both the bricks
14:53 sas pithagora, so your removal of one of the brick in replicate volume, doesn't mean a data loss
14:53 sas pithagora, on safer side can you back-up the data, before remove-brick operation
14:54 ndevos Durzo: that looks okay, the last log entries there show that some clients are connecting - client_uid=catau-syd-kvm02-30360-<date> where 30360 is the PID of the client on server catau-syd-kvm02
14:54 jobewan joined #gluster
14:54 Durzo kvm02 is the second brick in the replica
14:55 Durzo it does not have a fuse mount open
14:55 ndevos Durzo: might be the nfs-server, a fuse mount or gluster-self-heal-daemon
14:55 pithagora sas: no, its too much data,  3.6 TB, the customer doesn't have such space available and we have no time to do it
14:55 Durzo probably SHD
14:56 ndevos Durzo: you can check the other client_uid's and verify that one of them was a qemu-img
14:56 Durzo none of them were
14:56 Durzo it would have been 127.0.0.1 or kvm01
14:56 coredump so, I have some volumes mounted across many machines. The UID/GID on them for the file's owners are the same. Do I need to match it on the gluster servers too?
14:57 lalatenduM pithagora, ndevos , you can delete the volume and recreate one , deleting might sound dangerous but volume delete does not delete data
14:58 sas lalatenduM, in that case, data is retained in the brick, then we can go for "remove-brick" option, isn't it ?
14:58 Durzo ndevos, none of the pids were qemu either
14:59 lalatenduM pithagora, as of now you can just shutdown the server you want, then later when you have time create a new volume and migrate the data
14:59 lalatenduM sas, yes, but remove is not working here
14:59 sas lalatenduM, no remove-brick will work good man
15:00 pithagora lalatenduM, ndevos, sas,  - can i be sure that gluster will continue to work correctly if i just stopped the glusterfs server on storage1? i need it for short time only.
15:00 sas lalatenduM, remove-brick cli throws data-loss warning, as it usually do
15:00 glusterbot New news from newglusterbugs: [Bug 1092601] various inconsistencies when snapshots are taken. <https://bugzilla.redhat.com/show_bug.cgi?id=1092601> || [Bug 1092606] Same dentry stored as a file and directory on different subvolumes when snapshots are restored <https://bugzilla.redhat.com/show_bug.cgi?id=1092606>
15:01 lalatenduM pithagora, yes thats why replicated volumes are famous :), But for you satisfaction test it first, if it does not work we can think of something else
15:02 lalatenduM sas, it should be remove-brick start
15:02 sas lalatenduM, nope that is for dist volume, so data gets migrated to other brick
15:02 lalatenduM sas, agree
15:03 sas lalatenduM, in this case there is a replicate volume with 2 bricks, so the usage could be : gluster volume remove-brick <vol-name> replica 1 <brick> force
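
Spelled out with the names from this conversation, and with the caveat that this is the 3.4/3.5-era syntax (pithagora is on 3.3.1, where behaviour may differ) and that the brick must be written exactly as it appears in 'gluster volume info':

    # drop one brick from a 2-brick replica, leaving a plain distribute volume
    gluster volume remove-brick file-storage replica 1 storage1:/storage force
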
15:03 kaptk2 joined #gluster
15:03 ndevos Durzo: there is a diagnostics.brick-log-level (or something like that), you could enable DEBUG to see if a qemu-img process tries to connect and gets rejected
15:04 ndevos Durzo: for glusterd, you need to pass --log-level=DEBUG on the commandline, no idea how to do that in Ubuntu - but you can kill the process and start it manually
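
The log-level knobs ndevos mentions, sketched with a hypothetical volume name:

    gluster volume set kvmvol diagnostics.brick-log-level DEBUG
    gluster volume set kvmvol diagnostics.client-log-level DEBUG
    # for glusterd itself: stop the service and run the daemon by hand
    glusterd --log-level=DEBUG
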
15:04 lalatenduM sas, I think u r right, pithagora can plz try the command sas suggested
15:07 Durzo ndevos, setting brick logs to DEBUG and then re-running 'virsh start mydomain' gives 0 new logs
15:08 Durzo ndevos, ditto with glusterd --log-level=DEBUG
15:08 pithagora sas, lalatenduM, root@storage2 ~ #  gluster volume remove-brick file-storage replica 1 storage1:/storage force
15:08 pithagora Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
15:08 pithagora Incorrect brick storage1:/storage for volume file-storage
15:09 Durzo its like qemu doesnt even get up to the part where it tries to mount gluster.. it just hangs somewhere before
15:09 ndevos Durzo: try qemu-img, that is a little simpler to debug (although not much)
15:10 theron joined #gluster
15:10 ndevos Durzo: could it be that there is a (selinux replacement) ...armour... <something> restriction?
15:11 Durzo apparmor.. its possible.. ubuntu 14.04 now enforces apparmor and you cant turn it off (that i know of)
15:11 ndevos right, that, I have zero experience with it, but it may have some logs too?
15:11 Durzo looking now
15:12 Durzo upon trying to start the domain through virsh i get: kernel: [476628.502616] type=1400 audit(1398784342.454:131): apparmor="STATUS" operation="profile_load" profile="unconfined" name="libvirt-144d1e0b-da01-c379-402a-71179e793e8b" pid=18956 comm="apparmor_parser"
15:12 ndk joined #gluster
15:12 Durzo profile=unconfined should mean its free to do whatever
15:15 ndevos but, in this case it is not libvirt doing the gluster-communication, it is either qemu-img or qemu - I have no idea if apparmor inherits permissions or not...
15:18 sas joined #gluster
15:18 sas pithagora, missed ur chat !!, does that work ?
15:20 pithagora sas: no, im getting the same Incorrect brick storage1:/storage for volume file-storage
15:21 edong23 joined #gluster
15:22 ndevos Durzo: I assume you checked any firewalls, incoming *and* outgoing?
15:22 ndevos Durzo: also check that the gluster processes listen on 0.0.0.0 and not on a specific IP, that can cause issues on occasion too...
15:23 * ndevos isnt sure you can even configure that, but rather check never the less
15:25 Durzo ndevos, iptables has 0 rules, all chains have policy ACCEPT
15:25 Durzo ndevos, fuse mounts work perfectly btw
15:26 haomai___ joined #gluster
15:26 ndevos Durzo: I expected as much..,
15:31 glusterbot New news from newglusterbugs: [Bug 1092620] Upgraded from 3.4.3 to 3.5.0, and ownership on mount point changed. <https://bugzilla.redhat.com/show_bug.cgi?id=1092620>
15:31 jag3773 joined #gluster
15:33 lalatenduM pithagora, I think you should just shutdown the server
15:34 mjsmith2 joined #gluster
15:34 daMaestro joined #gluster
15:36 davinder3 joined #gluster
15:38 pithagora lalatenduM: thanks. this is what i did :)
15:44 somepoortech How can I tell what bricks are replica's and which are distributed?
15:46 Durzo oer.. i rebooted one of the bricks and its now showing that it is not online in a gluster volume info
15:48 Durzo ohwell sleep time...
15:50 dbruhn somepoortech, what do you get if you run gluster volume info
15:51 somepoortech http://pastebin.com/CNnzH5Jr
15:51 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:52 dbruhn somepoortech, Brick1: 10.252.3.254:/data/brick1 and Brick2: 10.252.3.252:/data/brick3 are a replica group
15:52 dbruhn and the other two are a replica group
15:53 dbruhn since you have a replica 2, the servers replicate on the order they were entered into the system
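
Put differently: with replica 2, consecutive pairs of bricks on the create (or add-brick) command line form the replica sets. A sketch with hypothetical servers:

    gluster volume create myvol replica 2 \
        serverA:/data/myvol/brick serverB:/data/myvol/brick \
        serverC:/data/myvol/brick serverD:/data/myvol/brick
    # replica pairs: (serverA, serverB) and (serverC, serverD)
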
15:53 somepoortech ok so I have in fact named these very badly
15:53 dbruhn yeah a little bit
15:54 dbruhn Here is an example of how I name mine
15:54 dbruhn http://fpaste.org/97840/78687113/
15:54 glusterbot Title: #97840 Fedora Project Pastebin (at fpaste.org)
15:55 dbruhn I have a bad habit of not setting my bricks to a directory under the file systems mount point too
15:55 dbruhn so in best practice I should be doing something like /var/testvol/aa/brick
15:56 somepoortech ok, yeah that makes more sense
15:56 dbruhn in this case I know aa and aa are replica's by design
15:56 somepoortech So since I'm asking questions and messing around with this, I'm trying to make a distributed file system for VM storage
15:56 dbruhn granted this naming scheme doesn't scale well if you change your replication scheme, but it works for my use.
15:56 dbruhn ok
15:57 somepoortech I've actually loaded up gluster-server on dom0 in Xen so I can mount an NFS volume on localhost in a Xen cluster
15:57 dbruhn bare metal xen or are you installing xen on a base OS?
15:57 somepoortech bare metal xen
15:58 somepoortech had to grab the RPM's and ignore the citrix repository to install
15:59 dbruhn ok
15:59 dbruhn So whats the question?
15:59 somepoortech Is this a terrible idea :-P
15:59 dbruhn lol, well... depends on what you are worried about
16:00 somepoortech mostly running a gluster server on dom0 for HA across the cluster.  I'm not worried to much about performance, I'm more interested in data safety
16:01 somepoortech I've also turned on cluster.quorum-type: fixed on gluster
16:04 dbruhn Well if you are using replica it's fine, I have seen some instances where people talk about a time out when a brick server goes offline the 42 second time out can cause the vm's the lock the storage and need a restart. I would suggest testing more than anything.
16:05 somepoortech my volume name is 'test' :-) I've also put on network.ping-timeout: 25 after reading JoeJulian's website
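
For reference, that timeout is an ordinary per-volume option; a minimal example using the volume name mentioned above:

    gluster volume set test network.ping-timeout 25
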
16:07 dbruhn I had seen one guy complaining about one of his vm's corrupting, I would suggest beating things up a bit.
16:07 dbruhn Force the system to do some self heals if you can
16:07 sauce joined #gluster
16:08 dbruhn and maybe test a rebalance operation, if you ever expect to expand the cluster
16:10 somepoortech roger I did manage to split-brain it, I don't know if the quorum will help with that
16:11 Mo__ joined #gluster
16:11 somepoortech Thanks for the help, I'm going to go straighten out my bricks now....
16:11 dbruhn I am still a bit hazy on quorum, I believe brick based quorum needs a replica count of 3 to function, server based quorum, I believe takes the whole volume offline?
16:11 dbruhn Maybe someone else can clarify that for me, and for the conversation
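
For the record, these are the quorum knobs being discussed, shown as a hedged sketch (volume name hypothetical) with behaviour as summarised in the 3.4/3.5 documentation rather than verified here:

    # client-side (AFR) quorum: writes fail when too few replica bricks are reachable
    gluster volume set myvol cluster.quorum-type auto     # or: fixed, together with cluster.quorum-count
    # server-side quorum: glusterd stops bricks when too few peers are reachable
    gluster volume set myvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%
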
16:12 hagarth joined #gluster
16:14 DV joined #gluster
16:17 semiosis purpleidea: there's only three packages in debuntu land: -common, -server, and -client.  the folders are the same.
16:18 semiosis Durzo: the libgfapi stuff is in glusterfs-common
16:18 purpleidea semiosis: got it, thanks.
16:18 purpleidea semiosis: first WIP branch: https://github.com/purpleidea/puppet-gluster/tree/feat/yamldata
16:18 glusterbot Title: purpleidea/puppet-gluster at feat/yamldata · GitHub (at github.com)
16:18 purpleidea testing / issue reporting appreciated (i didn't test it yet)
16:18 plarsen joined #gluster
16:21 Pavid7 joined #gluster
16:23 vpshastry joined #gluster
16:23 rwheeler joined #gluster
16:28 zerick joined #gluster
16:41 Matthaeus joined #gluster
16:49 plarsen joined #gluster
16:53 semiosis :O
17:01 vpshastry1 joined #gluster
17:01 DV joined #gluster
17:03 _dist joined #gluster
17:06 zaitcev joined #gluster
17:10 Slashman joined #gluster
17:17 B21956 joined #gluster
17:18 LoudNoises joined #gluster
17:35 ctria joined #gluster
17:38 vpshastry1 left #gluster
17:46 theron_ joined #gluster
17:47 lpabon joined #gluster
17:57 VerboEse joined #gluster
17:58 jrcresawn joined #gluster
18:01 belwood joined #gluster
18:05 Humble joined #gluster
18:07 dewey joined #gluster
18:20 awaad joined #gluster
18:20 awaad What is the difference between gluster file system & DRBD?
18:24 jbd1 awaad: drbd is basically network-based raid-1.  GlusterFS is a clustering filesystem
18:25 ricky-ticky1 joined #gluster
18:26 awaad jbd1: So, data saved in the gluster are distributed (not mirrored or synced) between multiple hosts. Right?
18:27 jbd1 awaad: it depends on your configuration.  You can have distributed volumes, replicated volumes, and combined distributed-replicated volumes.
18:27 jbd1 awaad: but with drbd, it is only replicated
18:28 awaad jbd1: Can I used Gluster to replicate data on remote hosts connected over WAN?
18:28 jbd1 awaad: yes, GlusterFS has a special geo-replication mode for high-latency connections
18:30 awaad jbd1: I have two remote hosts that I try to sync data between them through a two way sync tool called "unison". Is it possible for Gluster to replace it?
18:30 jbd1 awaad: it is generally recommended to have redundancy locally as well as remotely though, so you would typically see people using geo-replication to sync a local replicated or distributed-replicated volume to a remote equivalent
18:31 jbd1 awaad: last I checked, geo-replication in GlusterFS was designed to be unidirectional (so you have a master and a backup replica), so GlusterFS wouldn't be a perfect replacement for two-way sync
18:33 semiosis that's correct
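
A sketch of the one-way geo-replication jbd1 describes, using 3.5-era syntax and hypothetical volume and host names:

    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
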
18:34 * jbd1 just noticed mod_proxy_gluster.  Very interesting
18:37 jbd1 I wonder whether mod_proxy_gluster would be expected to outperform the current apache-serving-FUSE-mount setups many of us have
18:38 semiosis link?
18:38 jbd1 https://forge.gluster.org/mod_proxy_gluster
18:39 glusterbot Title: Apache Module mod_proxy_gluster - Gluster Community Forge (at forge.gluster.org)
18:39 * jbd1 clones it
18:40 dewey joined #gluster
18:41 jbd1 I currently maintain a farm of apache proxy servers which mostly just host static content from GlusterFS (via FUSE).  I find that they tend to be bandwidth-bound on the backend because my glusterfs cluster's bandwidth exceeds my proxy server interface capabilities.  If mod_proxy_gluster were smart enough to request a file only from one brick (I'm not using striping) instead of the current FUSE all-hands-on-deck approach, I could get more out of my proxies
18:42 davinder joined #gluster
18:46 jbd1 FUSE is better for me than NFS because it's better to max an interface on a proxy server than on a glusterFS server
18:50 AaronGr joined #gluster
18:51 ndevos jbd1: mod_proxy_gluster is a libgfapi test from me, I'm still planning to have a go at comparing that module vs fuse
18:52 jag3773 joined #gluster
18:52 jbd1 ndevos: don't worry, I'm not putting it into production today :)
18:52 ndevos jbd1: if the module proves useful and has advantages over the fuse mount, let me know what can get improved and I'll try to have a look at it
18:53 ndevos jbd1: there are RPM packages available too, see https://forge.gluster.org/mod_proxy_gluster/pages/Home
18:53 glusterbot Title: Apache Module mod_proxy_gluster - Home - Open wiki - Gluster Community Forge (at forge.gluster.org)
18:54 jbd1 ndevos: I'm on ubuntu, so I'll be compiling it locally (here in my lab)
18:55 jbd1 not too difficult: apxs2 -I /usr/include/glusterfs -c mod_proxy_gluster.c
18:57 ndevos jbd1: please share any results you get, I'm all ears - but not today anymore, it's getting late here
18:57 * ndevos goes afk
18:58 jbd1 k
19:08 jbd1 aha, gotta link in gfapi: sudo apxs2 -I /usr/include/glusterfs/ -L /usr/lib/x86_64-linux-gnu/ -l gfapi -ci mod_proxy_gluster.c
19:10 MrAbaddon joined #gluster
19:11 sprachgenerator joined #gluster
19:13 daMaestro joined #gluster
19:16 dbruhn joined #gluster
19:20 [o__o] joined #gluster
19:22 davinder joined #gluster
19:34 dbruhn_ joined #gluster
19:40 Philambdo joined #gluster
19:45 B21956 joined #gluster
19:48 B21956 joined #gluster
19:55 jbd1 well, mod_proxy_gluster didn't work for me.  could be something with gfapi, who knows.
20:14 plarsen joined #gluster
20:20 johnmark joined #gluster
20:21 jag3773 joined #gluster
20:24 glusterbot joined #gluster
20:25 theron joined #gluster
20:43 ctria joined #gluster
20:44 ctria joined #gluster
20:49 badone_ joined #gluster
20:53 tdasilva joined #gluster
20:59 pingitypong joined #gluster
21:05 pingitypong df is telling me my gluster mount point is using 80G but du is saying what's mounted there is only using 5G
21:05 pingitypong Anyone know why that would be?
21:05 y4m4 pingitypong: do you have sparse files? truncated?
21:05 pingitypong df thinks my filesystem is full (80G) even though there are only 5G in files on it.
21:06 dbruhn_ are the bricks mounted in directories below the file system, and are do the file systems actually have 80GB of data in them?
21:06 pingitypong it's ext4
21:06 y4m4 that doesn't answer much :-)
21:06 pingitypong dbruhn_: I'm pretty sure there is not 80G of data there
21:07 y4m4 left #gluster
21:07 dbruhn_ how sure? lol
21:07 y4m4 joined #gluster
21:07 pingitypong what
21:07 pingitypong what's the command to see if it's created as spase or not?
21:07 y4m4 pingitypong: 'ls -l'
21:07 pingitypong I had no idea
21:08 y4m4 pingitypong: 'ls -lhR'
21:09 pingitypong how does ls -lhR show that?
21:09 pingitypong sorry, it's really not obvious to me
21:10 MrAbaddon joined #gluster
21:10 dbruhn_ what is the output of df on your brick servers
21:10 dbruhn_ and what is the output of gluster volume info
21:11 pingitypong # gluster volume info
21:11 pingitypong Volume Name: fileshare
21:11 pingitypong Type: Replicate
21:11 pingitypong Status: Started
21:11 pingitypong Number of Bricks: 2
21:11 pingitypong Transport-type: tcp
21:11 pingitypong Bricks:
21:11 pingitypong Brick1: ls1.launch.it:/data
21:11 dbruhn_ use fpaste.org
21:11 pingitypong Brick2: ls2.launch.it:/data
21:11 pingitypong pretty simple setup
21:11 pingitypong sorry
21:11 dbruhn_ that way you don't get kicked by the bot
21:11 pingitypong :-o
21:12 dbruhn_ no worries, just wanted to make sure you didn't get kicked
21:12 dbruhn_ use fpaste and share the link
21:12 pingitypong http://fpaste.org/97940/13988059/
21:12 glusterbot Title: #97940 Fedora Project Pastebin (at fpaste.org)
21:13 basso joined #gluster
21:13 dbruhn_ ok so gluster is seeing the size of the file system it is on
21:13 dbruhn_ and your bricks look to be on your root file system
21:14 pingitypong correct
21:14 dbruhn_ with contains 76GB of data
21:14 pingitypong root@LaunchServer1:/mnt/files# du -sh
21:14 pingitypong 5.2G.
21:14 dbruhn_ and your gluster mount is showing 76GB
21:14 pingitypong but there is only 5G being used, not 80
21:14 pingitypong y 76 of 80
21:14 dbruhn_ I understand, but it is seeing the size of the file systems it is on, not how much data is in it
21:14 y4m4 pingitypong: ah you created using "/"
21:15 dbruhn_ the brick is just a directory
21:15 dbruhn_ see how
21:15 dbruhn_ /dev/vda                   80G   76G  846M  99% /
21:15 pingitypong y, I have a brick directory, and a volume mounted on it
21:15 dbruhn_ ls1.launch.it:/fileshare   80G   76G  846M  99% /mnt/files
21:15 dbruhn_ are the same usage
21:15 pingitypong that one however /dev/vda was created by Digital Ocean
21:15 y4m4 pingitypong: GlusterFS will get statvfs info from the underlying disk, not the directory
21:15 pingitypong ah
21:16 y4m4 pingitypong: you are using a directory which resides on /dev/vda
21:16 y4m4 pingitypong: which also has other data
21:16 y4m4 pingitypong: ideall you should have something like /dev/vdb
21:16 pingitypong should be just our data
21:16 y4m4 pingitypong: which is separated out of OS disk
21:16 pingitypong I can create /dev/vdb on top of the dev/vda that I get with our digital ocean droplet?
21:16 zerodeux left #gluster
21:17 zerodeux joined #gluster
21:18 y4m4 pingitypong: i don't know what 'digital ocean' is but if you wish to have glusterfs volume seperated out of your os disk you should have space available on a different disk
21:18 y4m4 that would be called 'data' disk'
21:18 refrainblue joined #gluster
21:18 y4m4 pingitypong: using os disk for gluster would lead to ENOSPC issues
21:18 pingitypong https://www.digitalocean.com/ <-- pretty awesome cloud hosting
21:18 glusterbot Title: SSD Cloud Server, VPS Server, Simple Cloud Hosting | DigitalOcean (at www.digitalocean.com)
21:18 dbruhn_ Digital Ocean is one flavor of online virtualized hosting service
21:18 dbruhn_ I use it for small stuff, it's cheap
21:19 dbruhn_ but pingitypong, y4m4 is correct you will want to add an additional virtual disk to your servers, and use those disks for gluster only storage
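
A sketch of the dedicated-data-disk layout y4m4 and dbruhn_ recommend, with hypothetical device and path names:

    mkfs.xfs /dev/vdb                    # a disk separate from the OS disk
    mkdir -p /export/fileshare
    mount /dev/vdb /export/fileshare     # and add a matching /etc/fstab entry
    mkdir /export/fileshare/brick        # use a subdirectory as the brick
    # then build the volume from serverX:/export/fileshare/brick
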
21:19 refrainblue Anyone using Oracle Linux's UEK been able to get glusterfs volume to mount?
21:19 pingitypong That is not to say I am running into ENOSPC issues here?
21:20 dbruhn_ and to be honest from the looks of your file system configuration, you are setting yourself up for a little bit of pain when it comes to /var/log etc
21:20 y4m4 pingitypong: you are going to run or already are "/dev/vda                   80G   76G  846M  99% /"
21:21 dbruhn_ refrainblue, sorry I am not, what kind of issues are you having?
21:21 pingitypong dbruhn_: what do you mean by that (obviously I understand logs grow, and have to be managed/rotated)
21:22 pingitypong my entire /var/log right now is only 600M
21:22 dbruhn_ gluster keeps its configuration files at /var/lib/gluster, and if that file system runs out of space it can corrupt your log files. Since you are running with a flat file system for everything, if something grows out of control you could run into problems.
21:22 pingitypong I've seen much worse (with varnish, for example)
21:22 refrainblue dbruhn_: I don't know what the issue is, but when I am using the UEK, I can't mount glusterfs volume.  Using RHCK, I can mount the volume...  so I am pretty sure it has something to do with UEK...
21:22 pingitypong oh I see dbruhn_
21:22 dbruhn_ I have seen gluster when it's having issues generate 20GB of logs in a day without flinching
21:23 y4m4 refrainblue: do you see they ship "fuse" kernel module?
21:23 y4m4 refrainblue: have you tried mounting with "nfs" ?
21:23 refrainblue y4m4: nfs will mount
21:23 pingitypong y4m4: so the question for me is still, how do I find out what this phantom 70G of data is?
21:24 y4m4 refrainblue: correct so it could be 'fuse' issue, paste the logs last 100lines
21:24 y4m4 or refrainblue 'modprobe fuse' and try again.
21:25 y4m4 pingitypong: du -akx / | sort -nr | head -10 (gets you top 10 culprit)
21:25 failshel_ joined #gluster
21:25 refrainblue i'm pretty sure fuse is loaded: lsmod | grep fuse =   fuse                   78015  0
21:26 y4m4 refrainblue: then some other incompatibility - can you look at logs?
21:27 y4m4 refrainblue: and perhps 'fpaste' it ?
21:27 refrainblue what is fpaste?
21:27 refrainblue oh i see its like pastebin
21:28 pingitypong y4m4: isn't df telling me the files in question have to be in /mnt/files, and not just / ?
21:28 y4m4 pingitypong: nope
21:28 dbruhn_ pingitypong, it's mostly likely your OS and all the rest of the packages etc.
21:29 pingitypong 76GB of OS packages ?!?
21:29 refrainblue y4m4: http://fpaste.org/97946/06922139/
21:29 glusterbot Title: #97946 Fedora Project Pastebin (at fpaste.org)
21:29 dbruhn_ pingitypong, did you build the server?
21:30 y4m4 refrainblue: "[2014-04-29 20:51:16.711124] I [fuse-bridge.c:4787:fuse_thread_proc] 0-fuse: unmounting /mnt/gfs" ? did you do this?
21:30 y4m4 refrainblue: manually?
21:30 pingitypong no, I just "spun up" an ubuntu image on digital ocean
21:30 refrainblue i did not
21:30 pingitypong it's a virtual server running 12.04
21:30 y4m4 refrainblue: hmm interesting, doesn't say much but might need debug logs can you open a bug?
21:31 refrainblue y4m4: it just fails to mount when i try to mount
21:31 y4m4 refrainblue: yeah i can see that its silently unmounting
21:31 refrainblue y4m4: i actually did open a bug report about it
21:31 y4m4 refrainblue: after its successfully connects
21:31 dbruhn_ refrainblue, what version of gluster?
21:31 y4m4 refrainblue: then lets get some debug logs 'mount -t glusterfs -olog-level=DEBUG'
21:32 dbruhn_ @ports
21:32 glusterbot dbruhn_: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
21:32 refrainblue y4m4: i've tested it with 3.4.0, 3.4.1, 3.4.2, and 3.5
21:32 dbruhn_ checked that for myself to make sure what I was seeing
21:32 y4m4 refrainblue: this could be just related to Unbreakable Kernel
21:32 sprachgenerator joined #gluster
21:33 refrainblue y4m4: i also have many other machines not using UEK successfully mounting
21:33 y4m4 refrainblue: yeah can there be remote access enabled?
21:33 y4m4 refrainblue: to look at it .
21:33 y4m4 ?
21:33 refrainblue y4m4: actually 2.6.39-400.17.1.el6uek.x86_64 mounts successfully
21:33 y4m4 refrainblue: hmm even more interesting
21:34 y4m4 refrainblue: never used UEK and we don't test with UEK inhouse :-)
21:37 y4m4 refrainblue: can there be more debug logs that i could poke around?
21:37 y4m4 refrainblue: or you can give me remote access.
21:39 refrainblue y4m4: thats a bit complicated, we use RSA tokens for VPNing in, and i dont think i could legally allow you to touch the data anyway lol
21:39 y4m4 refrainblue: ah alright then!
21:40 y4m4 refrainblue: then we work through the bugzilla :-)
21:40 pingitypong I want to thank y4m4 from providing the tip to pipe my sorted du through head -10
21:40 pingitypong which was what I was needed
21:40 refrainblue i would like to be able to get you debug logs though...  i remember there was a debug package for glusterfs
21:40 refrainblue y4m4: i just forgot how to use it
21:40 pingitypong I kept doing it without head -10 which was not terribly useful (even using sort -R)
21:41 pingitypong what I still don't understand is why df keeps telling me that all the space is being used in /mnt/files, when du (correctly) showed me the space was being used by /var/www
21:43 dbruhn_ pingitypong, gluster is reporting the space used and available based off the information for the file system the brick directory is on. /var/www is on / which is the same file system that contains your gluster brick directory
21:44 y4m4 refrainblue: ah you can just do "mount -t glusterfs -olog-level=DEBUG <hostname>:/<volname>"
21:44 * pingitypong slaps head
21:44 y4m4 refrainblue: then grab the log
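
For completeness, the debug mount also needs a target directory; a hedged example with placeholder names:

    mount -t glusterfs -o log-level=DEBUG gluster-host:/volname /mnt/gfs
    # the client log then lands under /var/log/glusterfs/, named after the mount point
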
21:45 pingitypong dbruhn_:  thanks, now I *understand* it, although that seems to make no sense, and generates misleading output such as I was seeing
21:45 refrainblue y4m4: you still want the last 100 lines?
21:45 dbruhn_ pingitypong, it's best practices to use a file system specific to gluster to store gluster data
21:46 pingitypong you mean glusterfs?
21:46 dbruhn_ refrainblue, are you seeing anything in your system logs showing an issue that might correspond?
21:47 refrainblue :q
21:47 refrainblue oops
21:48 dbruhn_ pingitypong, sec
21:48 dbruhn_ pingitypong, http://fpaste.org/97957/80811813/
21:48 glusterbot Title: #97957 Fedora Project Pastebin (at fpaste.org)
21:48 dbruhn_ this is my test system
21:49 dbruhn_ /var/testvol/aa, /var/testvol/ab, and /var/testvol/ac are all their own file systems
21:49 dbruhn_ these are the bricks that I use for my gluster volume
21:49 refrainblue y4m4: http://ur1.ca/h7mr1
21:49 glusterbot Title: #97959 Fedora Project Pastebin (at ur1.ca)
21:49 pingitypong I'll have to ask DigitalOcean how I can set that up, it's obviously a much nice approach
21:50 dbruhn_ here is what my test system looks like with the gluster output
21:50 dbruhn_ http://fpaste.org/97960/98808238/
21:50 glusterbot Title: #97960 Fedora Project Pastebin (at fpaste.org)
21:51 dbruhn_ you will see my gluster volume is mounted in that second fpaste
21:51 dbruhn_ and it is an aggregate of the information provided by the file systems that the bricks are located on
21:53 refrainblue dbruhn_: i am not seeing anything in the system logs relating to gluster
21:53 refrainblue dbruhn_: other than that i updated it to 3.5 earlier today - Apr 29 13:34:13 ltrac2 yum[38156]: Updated: glusterfs-fuse-3.5.0-2.el6.x86_64
21:54 dbruhn_ just making sure, all of your clients and servers are on the same version right?
21:55 dbruhn_ and you said you had clients that were not running the unbreakable kernel that are working?
21:55 refrainblue dbruhn_: the server is 3.4.1 i believe, the clients i've tested ranged from 3.4.0-3.4.2 & 3.5
21:56 dbruhn_ Did the system start out on 3.3.x?
21:56 refrainblue dbruhn_: yes if i boot the exact same machine into RHCK it will mount the gfs volume fine
21:56 refrainblue the server was always on 3.4.x
21:57 refrainblue we actually set it up fairly recently
21:57 dbruhn_ system is in production?
21:58 refrainblue the machine that cannot mount the volume as glusterfs is down at the moment for other reasons, but it was in production
21:58 refrainblue i mounted it as nfs in the meantime
21:59 refrainblue while i suppose not a huge deal, i read that mounting it as glusterfs has performance gains and i just wanted to get it working
21:59 y4m4 refrainblue: this is the issue "[2014-04-29 21:45:17.760365] D [fuse-bridge.c:4683:fuse_thread_proc] 0-glusterfs-fuse: terminating upon getting ENODEV when reading /dev/fuse
21:59 y4m4 "
21:59 y4m4 refrainblue: do you have 'selinux' ? enabled
22:00 refrainblue i do not, it should be disabled...
22:00 refrainblue sestatus: SELinux status:                 disabled
22:01 theron joined #gluster
22:02 y4m4 refrainblue: looks like a fuse module issue
22:02 y4m4 refrainblue: https://bugzilla.redhat.com/show_bug.cgi?id=764033
22:02 glusterbot Bug 764033: medium, low, 3.2.0, csaba, CLOSED NOTABUG, glusterfs-fuse: terminating upon getting ENODEV when reading /dev/fuse
22:02 glusterbot New news from newglusterbugs: [Bug 1092749] Remove dead code in xattrop <https://bugzilla.redhat.com/show_bug.cgi?id=1092749>
22:03 refrainblue y4m4: i was reading the same page
22:04 refrainblue y4m4: wow that's from 3 years ago...
22:04 Paul-C joined #gluster
22:04 y4m4 refrainblue: http://supercolony.gluster.org/pipermail/gluster-users/2013-July/036371.html
22:04 glusterbot Title: [Gluster-users] Mounting replicated volume fails - terminating upon getting ENODEV when reading /dev/fuse (at supercolony.gluster.org)
22:04 y4m4 refrainblue: someone reported same issues with Oracle Linux
22:05 dbruhn_ so, whats worse for you refrainblue the kernel or ifs?
22:05 y4m4 refrainblue: this is basically /dev/fuse is reporting ENODEV, but its hard to know what since we have to see what Oracle did with their fuse
22:06 y4m4 refrainblue: could be a bug they have
22:06 y4m4 refrainblue: and its fixed
22:06 y4m4 as you said it works on a different kernel
22:06 y4m4 refrainblue: what is the kernel version for this node?
22:06 y4m4 and the one it was working?
22:06 y4m4 refrainblue: ^^
22:07 refrainblue y4m4: not working - 3.8.13-26.2.3.el6uek.x86_64
22:07 y4m4 refrainblue: oh really bleeding edge!
22:07 refrainblue y4m4: working uek - 2.6.39-400.17.1.el6uek.x86_64
22:07 dbruhn_ maybe put in a bug report with oracle on that one then
22:09 refrainblue im not really against switching from uek, but our dba didnt want to change things
22:09 dbruhn_ be back in a bit
22:09 dbruhn_ Well now you have a reason to switch
22:09 dbruhn_ lol
22:09 refrainblue i was thinking of doing it secretly...
22:30 edward1 joined #gluster
22:36 masterzen_ joined #gluster
22:37 churnd- joined #gluster
22:38 johnmwilliams__ joined #gluster
22:51 MrAbaddon joined #gluster
22:57 sauce joined #gluster
23:03 theron_ joined #gluster
23:04 MrAbaddon joined #gluster
23:08 awaad joined #gluster
23:12 exedore6 joined #gluster
23:28 exedore6_ joined #gluster
23:31 pingitypong joined #gluster
23:45 pingitypong joined #gluster
23:51 jag3773 joined #gluster
23:58 coredump joined #gluster
