
IRC log for #gluster, 2014-06-19


All times shown according to UTC.

Time Nick Message
00:00 jvandewege joined #gluster
00:10 theron joined #gluster
00:12 theron joined #gluster
00:19 theron joined #gluster
00:32 bala joined #gluster
00:37 diegows joined #gluster
00:54 jvandewege_ joined #gluster
00:54 theron joined #gluster
00:58 jonathanpoon joined #gluster
01:04 theron_ joined #gluster
01:15 mjsmith2 joined #gluster
01:19 gildub joined #gluster
01:49 diegows joined #gluster
01:55 bala joined #gluster
01:56 dusmantkp_ joined #gluster
02:00 haomaiwang joined #gluster
02:04 harish joined #gluster
02:16 wcay joined #gluster
02:19 bharata-rao joined #gluster
02:21 theron joined #gluster
02:22 wcay So, I have a mildly complicated gluster question that I am a bit stumped on if anyone is around...
02:30 lalatenduM joined #gluster
02:46 edong23 joined #gluster
03:01 theron joined #gluster
03:01 haomaiwa_ joined #gluster
03:10 bala joined #gluster
03:30 rejy joined #gluster
03:31 gmcwhist_ joined #gluster
03:32 theron joined #gluster
03:48 itisravi joined #gluster
03:50 rastar joined #gluster
03:50 kshlm joined #gluster
03:50 RameshN joined #gluster
03:53 haomaiwang joined #gluster
04:01 nthomas joined #gluster
04:18 theron joined #gluster
04:18 hchiramm__ joined #gluster
04:23 shubhendu_ joined #gluster
04:28 hagarth joined #gluster
04:37 saurabh joined #gluster
04:43 dusmantkp_ joined #gluster
04:43 ppai joined #gluster
04:44 ndarshan joined #gluster
04:51 rjoseph joined #gluster
04:53 prasanthp joined #gluster
04:54 haomaiwang joined #gluster
04:56 spandit joined #gluster
04:56 kdhananjay joined #gluster
05:01 glusterbot New news from newglusterbugs: [Bug 1111020] Unused code changelog_entry_length <https://bugzilla.redhat.com/show_bug.cgi?id=1111020>
05:07 haomaiwang joined #gluster
05:15 spajus joined #gluster
05:29 bala joined #gluster
05:30 karnan joined #gluster
05:37 dusmant joined #gluster
05:44 nshaikh joined #gluster
05:47 aravindavk joined #gluster
05:54 bala joined #gluster
05:55 doekia joined #gluster
05:55 hagarth joined #gluster
05:55 rjoseph joined #gluster
05:57 nbalachandran joined #gluster
05:58 davinder15 joined #gluster
06:01 glusterbot New news from newglusterbugs: [Bug 1111030] rebalance : once you stop rebalance, rebalance status says ' failed: Rebalance not started.' <https://bugzilla.redhat.com/show_bug.cgi?id=1111030> || [Bug 1111031] CHANGELOG_FILL_HTIME_DIR macro fills buffer without size limits <https://bugzilla.redhat.com/show_bug.cgi?id=1111031>
06:01 raghu joined #gluster
06:06 kumar joined #gluster
06:07 lalatenduM joined #gluster
06:07 ramteid joined #gluster
06:12 LebedevRI joined #gluster
06:16 meghanam joined #gluster
06:16 meghanam_ joined #gluster
06:31 glusterbot New news from newglusterbugs: [Bug 1094478] Bad macro in changelog-misc.h <https://bugzilla.redhat.com/show_bug.cgi?id=1094478>
06:37 mbukatov joined #gluster
06:38 d-fence joined #gluster
06:39 rjoseph joined #gluster
06:40 ekuric joined #gluster
06:41 aravindavk joined #gluster
06:42 gmcwhist_ joined #gluster
06:42 _polto_ joined #gluster
06:42 hagarth joined #gluster
06:46 d-fence_ joined #gluster
06:52 haomaiwa_ joined #gluster
06:54 suliba joined #gluster
06:56 ctria joined #gluster
06:58 _polto_ joined #gluster
06:59 calum_ joined #gluster
07:03 coredump joined #gluster
07:04 haomai___ joined #gluster
07:08 dusmant joined #gluster
07:09 eseyman joined #gluster
07:10 purpleidea joined #gluster
07:14 [o__o] joined #gluster
07:21 hybrid512 joined #gluster
07:21 overclk_ joined #gluster
07:21 kanagaraj joined #gluster
07:25 juhaj_ joined #gluster
07:25 fraggeln_ joined #gluster
07:25 l0uis_ joined #gluster
07:25 the-me_ joined #gluster
07:27 ProT-0-TypE joined #gluster
07:29 cyberbootje joined #gluster
07:29 romero joined #gluster
07:32 glusterbot New news from newglusterbugs: [Bug 1111060] [SNAPSHOT] : glusterd fails to update file-system type for brick which is present in other node. <https://bugzilla.redhat.com/show_bug.cgi?id=1111060>
07:45 Sunghost joined #gluster
07:51 Sunghost @partner @joejulian - question about my problem with 3.5 beta2 and rebalance
07:52 Sunghost rebalance is still running since yesterday - on brick1, which runs the rebalance cli, the cpu is still at 100%
07:53 Sunghost vol1-rebalance log says ->2014-06-19 07:52:28.248800] I [dht-rebalance.c:1800:gf_defrag_status_get] 0-glusterfs: Rebalance is in progress. Time taken is 61530.00 secs
07:53 Sunghost [2014-06-19 07:52:28.248915] I [dht-rebalance.c:1803:gf_defrag_status_get] 0-glusterfs: Files migrated: 0, size: 0, lookups: 199, failures: 0, skipped: 0
07:53 Sunghost and i cant see any progress in rebalance
07:54 Sunghost cli rebalance status says since yesterday scanned:199 failures:0 and skipped:0 no progress too - normal?
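    A minimal sketch of watching rebalance progress from another terminal, assuming the volume is named vol1 as in the log paste above and that the rebalance log sits in the usual default location:
        gluster volume rebalance vol1 status
        tail -n 20 /var/log/glusterfs/vol1-rebalance.log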
08:02 ricky-ti1 joined #gluster
08:22 Nightshader Is the Gluster VFS not available for Samba4?
08:24 _polto_ joined #gluster
08:26 Paul-C joined #gluster
08:28 NuxRo Nightshader: it has been an integral part of samba for a while now, afaik
08:29 Nightshader I just installed samba4 on CentOS 6.5 but the gluster vfs module is not included. Someone mentions 4.1? (http://gluster.org/pipermail/gluster-users/2013-September/037270.html)
08:29 glusterbot Title: [Gluster-users] compiling samba vfs module (at gluster.org)
08:31 Nightshader NuxRo
08:31 Nightshader the smbd -b doesnt show the module, even when I installed samba-3.6.9-168.el6_5.x86_64
08:32 partner Sunghost: hmph, it _should_ move something definately. any errors on logs?
08:33 ppai joined #gluster
08:33 NuxRo hm, the samba in el6 might not have it at all, i remember I had to build it myself
08:36 Nightshader Ok, "git clone git://git.samba.org/samba.git samba" and then switch to 4.1-stable?
08:36 partner Sunghost: any entries such as "fixing the layout of" or "migrate data called on"
08:37 NuxRo Nightshader: I guess so, havent done it for v4 yet
08:37 NuxRo are you on EL7?
08:38 Nightshader NuxRo: I'm on EPEL-6.5 (CentOS)
08:40 NuxRo aha
08:40 NuxRo well, try to build that then
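    For reference, a sketch of what using Samba's vfs_glusterfs module looks like once a build ships it; the module path varies by distro, and the share and volume names below (gv0) are placeholders:
        # does this samba build include the module?
        ls /usr/lib64/samba/vfs/glusterfs.so
        # minimal smb.conf share definition
        [gv0share]
            path = /
            read only = no
            vfs objects = glusterfs
            glusterfs:volume = gv0
            glusterfs:logfile = /var/log/samba/glusterfs-gv0.log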
08:40 Sunghost hello partner -> yesterday was last entry for fixing
08:41 Sunghost same for migrate data
08:41 vimal joined #gluster
08:42 Sunghost since then it's 99% the same: "rebalance in progress. time taken is .... files migrated:0, size:0, lookups:199, failures:0, skipped 0"
08:43 Norky joined #gluster
08:44 Sunghost see one entry from yesterday with bailing out frame type (glusterfs 3.3) and remote operation failed: transport endpoint is not connected. path: /backup/...
08:44 Sunghost this follows 1 or 2 times an hour since then, with the last message in the line being "lookup failed"
08:46 Sunghost last entry was from approx 30min ago - the file is a different one each time <- so it is still rebalancing - but very slowly for a volume with nothing to do
08:48 Nightshader Another question, yesterday the memory leak in quota was fixed. Are these backported and applied to the 3.5 rpms?
08:50 partner Sunghost: well pretty much sounds like its not going forward if the other end isn't on the loop
08:50 partner oho, new packages available, wuhuu
08:51 Sunghost for info, the initial node was setup with version 3.5 <- perhaps a problem in this version?
08:51 NuxRo partner: where?
08:51 partner oh noes, debian gone missing completely, nevermind
08:53 partner i'm not sure if i should wait for it or start building the stuff myself as other distros seem to have -2 available?
08:54 hagarth joined #gluster
08:54 partner ohwell, its almost long weekend ahead so not in a hurry with that, need to schedule downtime anyways in advance and shut down part of the production
08:54 NuxRo partner: what gluster version are you on?
08:55 Nightshader partner:Where are the new packages and which version? :)
08:56 Sunghost i now tried to count all files in the brick directory and got a message that the structure has to be cleaned ?! - found 122894 files in 68301 folders
08:57 partner i'm running 3.3.2 but waiting for the 3.4.4-2, should be out any second now
08:58 partner at least its out for fedora & co so basically just missing the package for debian/ubuntu
08:58 partner they are available at the download.gluster.org
08:59 NuxRo bug simbiosis, he's the one building the deb pkgs
09:00 partner i know, i'm not in a rush so i will save my pushing for later need :)
09:06 rastar joined #gluster
09:14 deepakcs joined #gluster
09:17 dbouwyn joined #gluster
09:20 vpshastry joined #gluster
09:21 dusmant joined #gluster
09:25 VerboEse joined #gluster
09:30 lalatenduM joined #gluster
09:31 FooBar How do I start a single brick glusterfsd process that is down?
09:32 kdhananjay FooBar: You can execute 'gluster volume start <volume-name> force'
09:34 FooBar volume start: gv0: failed: Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /export/sdd1/brick. Reason : No data available
09:34 VerboEse joined #gluster
09:35 FooBar trying fix from: https://bugzilla.redhat.com/show_bug.cgi?id=991084 now ;)
09:35 glusterbot Bug 991084: high, unspecified, ---, vbellur, NEW , No way to start a failed brick when replaced the location with empty folder
09:39 Slashman joined #gluster
09:39 FooBar yup...seems to work
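    A sketch of the commonly cited workaround for bug 991084: restore the volume-id xattr on the re-created brick directory, then force-start. Volume and brick names are taken from the error above; the glusterd info file path is the usual default:
        vol_id=$(grep volume-id /var/lib/glusterd/vols/gv0/info | cut -d= -f2 | sed 's/-//g')
        setfattr -n trusted.glusterfs.volume-id -v 0x$vol_id /export/sdd1/brick
        gluster volume start gv0 force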
09:40 Sunghost Hi, i now checked the xfs filesystem on brick one and got some messages "unknown bekam data" <- does that mean anything to anybody?
09:46 harish joined #gluster
09:52 Paul-C joined #gluster
10:12 _polto_ joined #gluster
10:12 _polto_ joined #gluster
10:22 nshaikh joined #gluster
10:24 sh_t joined #gluster
10:25 ghenry joined #gluster
10:26 shubhendu_ joined #gluster
10:35 SpComb I'm using gluster and openvswitch on Ubuntu 14.04... gluster blocks init when trying to mount filesystems before openvswitch is set up...
10:36 SpComb how do I tweak the upstart configs to make it work :/
10:42 kkeithley1 joined #gluster
10:53 shubhendu_ joined #gluster
10:55 _polto_ joined #gluster
10:55 dusmant joined #gluster
11:08 spajus joined #gluster
11:12 haomaiwang joined #gluster
11:19 FooBar is there a way to 'limit' the speed/priority that self-healing is running at..
11:19 FooBar whenever I enable the SHD and start healing, load goes up to 80...
11:20 FooBar SpComb: add '_netdev' to the options in /etc/fstab ?
11:21 SpComb does ubuntu mountall not recognize glusterfs as a network filesyste?
11:21 SpComb FooBar: what's _netdev?
11:21 FooBar makes sure the filesystem is only mounted after network is up
11:22 morse joined #gluster
11:22 FooBar SpComb: don't know... don't use ubuntu.... _netdev works for redhat/centos/fedora systems anyway
11:24 ppai joined #gluster
11:27 SpComb strings /sbin/mountall reveals a _netdev and vim syntax-highlights it... no mention of in the fstab or mountall man pages >_>
11:28 hagarth joined #gluster
11:28 Pupeno joined #gluster
11:31 prasanthp joined #gluster
11:31 SpComb ok, found https://bugs.launchpad.net/ubuntu/+source/mountall/+bug/1103047
11:31 glusterbot Title: Bug #1103047 “mountall causes automatic mounting of gluster shar...” : Bugs : “mountall” package : Ubuntu (at bugs.launchpad.net)
11:31 SpComb there should be a big fat warning about this somewhere..
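    On Ubuntu's mountall-based boot, one commonly used workaround is to stop the glusterfs entry from blocking boot with the Ubuntu-specific nobootwait option; a sketch fstab line, with volume name and mount point as placeholders:
        localhost:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,nobootwait  0  0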
11:33 Nightshader2 joined #gluster
11:36 julim joined #gluster
11:41 haomaiwang joined #gluster
11:41 prasanthp joined #gluster
11:43 DV joined #gluster
11:47 gildub joined #gluster
11:54 Pupeno semiosis: hello.
11:56 haomai___ joined #gluster
12:00 RameshN left #gluster
12:02 marbu joined #gluster
12:03 glusterbot New news from newglusterbugs: [Bug 1111169] Dist-geo-rep : After upgrade, geo-rep crashes in gf_history_changelog. <https://bugzilla.redhat.com/show_bug.cgi?id=1111169>
12:05 Slashman joined #gluster
12:05 vimal joined #gluster
12:16 jcsp joined #gluster
12:23 Pupeno Argh... it's so frustrating that gluster feels so amazingly magical but then I can't mount it at boot time.
12:29 aravindavk joined #gluster
12:29 sroy_ joined #gluster
12:33 partner such issue has existed yeah but seems to be working better nowadays. i used to hack it by adding mount -a to /etc/rc.local as i did not want to tune init scripts
12:37 chirino joined #gluster
12:38 ProT-O-TypE joined #gluster
12:38 gildub joined #gluster
12:39 TvL2386 joined #gluster
12:41 primechuck joined #gluster
12:45 vpshastry joined #gluster
12:46 bene3 joined #gluster
12:48 tdasilva joined #gluster
12:48 Pupeno partner: mount -a wouldn't help me. I'm surprised it helped you.
12:49 partner hmm what's the issue with that?
12:49 Pupeno partner: mount -a mounts every fstab entry that is not marked with noauto.
12:50 Pupeno partner: if I have the entries in fstab without noauto my computer doesn't even boot.
12:50 bennyturns joined #gluster
12:50 partner i'm not sure how that is related, usually people want to get all fstab-defined mounts to be available..
12:50 Nightshader joined #gluster
12:51 Pupeno mount -a doesn't mount my glusterfs volumes, so, adding it won't make a difference.
12:51 halfinhalfout joined #gluster
12:52 partner so umm if you have fstab entry for gluster volume it won't mount if you manually run mount -a ??
12:52 Pupeno partner: no, because it's marked as noauto.
12:53 partner i'm not following anymore you complain it does not mount on boot and yet you have noauto defined..
12:53 Pupeno partner: if I remove noauto, the server doesn't boot due to not being able to mount them.
12:54 partner what was your distribution?
12:54 Pupeno Ubuntu 12.04.
12:54 partner hmm, sorry i don't have that available for testing
12:55 partner i had issues with debian to mount at boot but the boot went through fine even thought it was unable to mount, hence the mount -a "solution" until the issue was fixed
12:55 plarsen joined #gluster
12:57 marbu joined #gluster
12:58 lalatenduM joined #gluster
12:59 brad_mssw joined #gluster
13:01 Pupeno It does surprise me that mounting a localhost volume seems such a rare thing to do.
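    One way to reconcile the two constraints above (boot must not hang, but the volume should still end up mounted) is to keep noauto in fstab and mount the volume explicitly from rc.local once the network and glusterd are up; a sketch with placeholder names:
        # /etc/fstab
        localhost:/myvol  /mnt/myvol  glusterfs  defaults,noauto  0  0
        # /etc/rc.local
        mount -t glusterfs localhost:/myvol /mnt/myvol || true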
13:02 edward1 joined #gluster
13:03 glusterbot New news from newglusterbugs: [Bug 1108669] RHEL 5.8 mount fails "cannot open /dev/fuse" <https://bugzilla.redhat.com/show_bug.cgi?id=1108669>
13:09 nbalachandran joined #gluster
13:14 haomaiwa_ joined #gluster
13:15 ndarshan joined #gluster
13:17 Nightshader Question: I installed Samba-4.1.8 with the vfs object = acl_xattr, bricks are using XFS, but we get: "set_nt_acl: failed to convert file acl to posix permissions"
13:22 theron joined #gluster
13:23 ghenry joined #gluster
13:27 jcsp1 joined #gluster
13:27 vimal joined #gluster
13:35 mjsmith2 joined #gluster
13:38 vimal joined #gluster
13:40 jcsp joined #gluster
13:49 spajus joined #gluster
13:49 kiwikrisp joined #gluster
13:51 japuzzo joined #gluster
13:52 jcsp1 joined #gluster
13:56 jbrooks joined #gluster
13:59 ctria joined #gluster
14:01 mjsmith2 joined #gluster
14:02 bene3 joined #gluster
14:17 coredump joined #gluster
14:20 jcsp joined #gluster
14:22 dusmant joined #gluster
14:24 dblack joined #gluster
14:24 _polto_ joined #gluster
14:25 wushudoin joined #gluster
14:27 LebedevRI joined #gluster
14:29 jcsp joined #gluster
14:31 kiwikrisp I recently did a rolling upgrade from 3.4 to 3.5 and am having problems with the result. Does anybody have a howto on how to completely remove glusterfs so I can do a clean re-install without reloading the OS?
14:31 dbouwyn joined #gluster
14:33 glusterbot New news from newglusterbugs: [Bug 1093768] Comment typo in gf-history.changelog.c <https://bugzilla.redhat.com/show_bug.cgi?id=1093768>
14:35 FooBar kiwikrisp: which OS /
14:35 kiwikrisp FooBar: CentOS 6.5
14:35 jobewan joined #gluster
14:37 kkeithley_ yum -y erase gluster\*; rm -rf /var/lib/glusterd; rm -rf /var/log/glusterfs  should about do it. And erase/newfs your brick volumes
14:40 ctria joined #gluster
14:44 jcsp joined #gluster
14:45 FooBar kiwikrisp: and to be sure ... 'find /usr /var /etc -name '*gluster*''
14:45 kiwikrisp kkeithley_: Thanks. Once I remove the directories I should be able to re-add the /newfs (brick) directories after the reload just by removing the newfs/.glusterfs folders? Maybe it would be best to remove them with the setfattr commands before I remove glusterfs just to be sure.
14:45 daMaestro joined #gluster
14:46 kiwikrisp FooBar: Good point, thanks.
14:48 jbrooks joined #gluster
14:50 jcsp1 joined #gluster
14:57 glusterbot New news from resolvedglusterbugs: [Bug 1091648] Bad error checking code in feature/gfid-access <https://bugzilla.redhat.com/show_bug.cgi?id=1091648>
15:01 deepakcs joined #gluster
15:04 elico joined #gluster
15:04 wcay joined #gluster
15:05 sjm joined #gluster
15:06 haomaiwa_ joined #gluster
15:06 wcay joined #gluster
15:07 haomaiw__ joined #gluster
15:10 wcay I have a question on gluster volumes and testing disaster recovery methods with gluster if anyone has some experience with that
15:13 amesritter joined #gluster
15:13 wcay more specifically after an infrastructure recovery of brick volumes performing a compare of the before and after recovery volume contents
15:14 amesritter I have a question about CPU usage
15:15 ekuric joined #gluster
15:16 amesritter I've just set up a lamp stack with 2 servers using glusterfs and the CPU usage is out of control on both machines. The databases live on a separate server and just the apache data lives on the gluster volume. The CPU pretty much runs at 100% all the time on both machines.
15:16 amesritter I've switched from the gluster client to nfs with no change in the results
15:18 kkeithley_ kiwikrisp: there are xattrs in the brick dir. newfs or rm -rf $path-to-brick. (That's why I like to make my brick on a subdir, then I can just rm -rf it)
15:20 ndk joined #gluster
15:21 kiwikrisp kkeithley_: Thanks. I do as well, it's just a lot of documentation to change the folder name where the data resides so I just clear the xattrs (JoeJulian blog instructions)
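    The xattr-clearing procedure being referred to (so an old brick directory can be reused without renaming it) is roughly the following, run against each brick path; treat it as a sketch and double-check before running it on live data:
        setfattr -x trusted.glusterfs.volume-id /path/to/brick
        setfattr -x trusted.gfid /path/to/brick
        rm -rf /path/to/brick/.glusterfs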
15:22 MacWinner joined #gluster
15:22 davinder15 joined #gluster
15:33 glusterbot New news from newglusterbugs: [Bug 1099294] Incorrect error message in /features/changelog/lib/src/gf-history-changelog.c <https://bugzilla.redhat.com/show_bug.cgi?id=1099294>
15:35 theron_ joined #gluster
15:43 wcay amesritter, we have a 2 server cluster with 4vCPU and 16GB ram and we can push our CPU utilization to the high 80% running load testing in PHP on wordpress sites. However, as we improved our web rendering speed our CPU utilization didn't change (even after we doubled the throughput). So we see high cpu usage, but it wasn't a limiting factor...however with small files we have do extensive testing on file system throughput and sh
15:43 wcay own the fuse client (which we use for HA purposes) can only push single digits of M/sec and isn't far behind NFS.
15:44 wcay done*
15:45 amesritter so I can just expect the very high CPU usage as part of gluster? Performance seems ok but I'm having trouble with apache kicking offline whenever files are being copied over. IE, via Wordpress updates and whatnot.
15:46 cmtime amesritter: are you running everything on the same 2 servers? Gluster and lamp?
15:48 amesritter Here's the full setup: 2 "LAP" servers using GlusterFS and ZPanel with an external MYSQL server. The gluster replication traffic is on a separate network from everything else.
15:48 amesritter All of the ZPanel data lives on the brick and a load balancer distributes the load from the outside.
15:49 amesritter I'm in the process of migrating sites to it and the services on one or both servers will randomly stop when transferring files and I assume its because of the astronomically high cpu usage.
15:50 cmtime I am not a gluster god but my advice would be be realistic with your work load.  I mean if the web servers are pushing X gluster is having to serve X
15:50 amesritter I have one of the nodes shutdown right now and I can still see the gluster cpu spiking at glusterfsd 73% and glusterfs at 25%
15:51 ctria joined #gluster
15:52 cmtime Few things I have run into is having to tweak the OS a lot.  under redhat my biggest problem is "echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled"
15:53 wcay dedicated gluster servers
15:55 wcay we have dedicated mysql, dedicated gluster, dedicated content rendering (apache + php), and dedicated load balancing/caching (nginx)
15:56 amesritter we're poor
15:56 semiosis amesritter: what kind of machine is it that gets high cpu?
15:57 amesritter Yeah, it definitely makes more sense to have dedicated storage servers. There's no way the bosses would approve it. I had no idea the gfs service would hit the CPU this hard.
15:57 amesritter Adding more resources may be possible. They're vmware vm's in a vCloud platform running on NetApp storage.
15:58 amesritter The problem is, I only specd the front end servers with 2 cores and 2gb RAM. I've hosted more sites than this on a single server running LAMP with these resources.
15:59 amesritter The database server has 4 cores and 16GB RAM and the resources are sitting at nearly 0 usage.
15:59 cmtime Could you put gluster on the db servers?
16:00 amesritter There's only 1 database server or else I would do that.
16:01 cmtime how much traffic are you trying to server?  1-10Mbps?  100Mbps? More?
16:01 Matthaeus joined #gluster
16:02 amesritter Not sure how to gauge that. They have gb NICs and there's about 30 wordpress sites.
16:03 Matthaeus joined #gluster
16:04 glusterbot New news from newglusterbugs: [Bug 1099683] Silent error from call to realpath in features/changelog/lib/src/gf-history-changelog.c <https://bugzilla.redhat.com/show_bug.cgi?id=1099683>
16:04 jag3773 joined #gluster
16:06 cmtime dstat or atop or iftop
16:15 amesritter Ok, I can use dstat. What stats exactly would you need to see?
16:15 amesritter top
16:16 cmtime do you have a high wai
16:17 cmtime To me high would be more than 5
16:17 amesritter mostly zeros but I'm seeing a 6 or 8 pop up here and there
16:18 cmtime are you low on ram?
16:18 cmtime Are the gluster processes using up a lot of the ram?
16:19 amesritter 138 free
16:19 amesritter saw some 28/29 on cpu wait
16:20 cmtime how much in buffers and cached? If it mostly in used then I think you are on the right track
16:21 amesritter ------memory-usage----- ----total-cpu-usage----
16:21 amesritter used  buff  cach  free|usr sys idl wai hiq siq
16:21 amesritter 1369M 20.1M  483M  128M| 25  35  32   3   0   6
16:21 amesritter 1370M 20.1M  483M  128M| 24  33  32   5   0   6
16:21 amesritter 1371M 20.1M  484M  127M| 30  32  27   6   0   5
16:21 amesritter 1371M 20.1M  484M  126M| 25  35  29   6   0   6
16:21 amesritter 1372M 20.1M  484M  125M| 24  34  30   4   0   9
16:21 semiosis please use pastie.org
16:25 amesritter http://pastie.org/9305694#4
16:25 glusterbot Title: #9305694 - Pastie (at pastie.org)
16:25 zaitcev joined #gluster
16:26 cmtime amesritter: I spent weeks on my big gluster setups having to tweak things.  My scale is a little different but I would say that if most of the cpu usage is not your webserver that gluster might be fighting for ram.
16:26 rturk joined #gluster
16:26 amesritter Ok, RAM I can grab from the database server.
16:27 cmtime what distro are you running ?
16:27 amesritter Ubuntu 12.04 LTS
16:32 zerick joined #gluster
16:34 sh_t joined #gluster
16:36 cmtime Try adding some ram see if it helps.  http://www.sysxperts.com/home/announce/vmdirtyratioandvmdirtybackgroundratio
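    The linked article is about the vm.dirty_* sysctls; a sketch of the kind of tuning it suggests, with starting-point values that are assumptions rather than gluster defaults (the THP path below is the non-RHEL spelling of the setting cmtime pasted earlier):
        sysctl -w vm.dirty_background_ratio=5
        sysctl -w vm.dirty_ratio=10
        echo never > /sys/kernel/mm/transparent_hugepage/enabled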
16:42 Mo_ joined #gluster
16:42 systemonkey joined #gluster
16:45 amesritter Added 2GB RAM per machine and right now it looks better
16:55 jcsp joined #gluster
17:08 jkroon joined #gluster
17:08 jkroon hi guys
17:09 jkroon opening a file for writing and then issueing flock on it, and if the whole cluster isn't available causes issues.
17:09 jcsp_ joined #gluster
17:10 jkroon even after the whole cluster re-joined these flock calls remained blocked.
17:11 ceddybu joined #gluster
17:15 bennyturns jkroon, can you pastebin the commands?  I can try on my cluster
17:16 jkroon bennyturns, exec 5>/file/on/gluster; flock -w2 -x 5
17:17 jkroon 4 nodes, had a switch failure, and after all the instances that got to that just started blocking.
17:17 bennyturns jkroon, kk then bounce a node?
17:17 jkroon bennyturns, kill the switch inbetween... i've got a cron on every node doing that every minute.
17:18 bennyturns I have 4 nodes but I can't kill the switch, /me thinks
17:18 bennyturns so all 4 nodes lose connectivity with each other?
17:19 jkroon that's pretty much what happened.  redundant switches, servers has two NICs @ 1Gbps, with bridge between them, STP with core switches the masters, what happened is that the "root switch" died, and somehow one of the servers got selected as the "root node" in the STP, causing havoc.
17:20 jkroon cluster was essentially dead in the water.
17:20 jkroon that issue has since been sorted but I had to reboot all the gluster *clients* in order to get the flock's killed.
17:20 bennyturns ya no good
17:20 bennyturns seem like the lock got lost when they lost connectivity
17:21 bennyturns I am gonn try with iptables
17:24 jkroon makes sense.
17:26 jkroon problem is however no recovery.
17:30 bennyturns jkroon, lolz I locked myself out, had to bounce nodes :P
17:34 jkroon lol
17:34 jkroon explicit accept for ssh :)
17:37 bennyturns jkroon, I did, when I did iptables -F to flush it is where things got wonky :P
17:38 bennyturns http://fpaste.org/111228/03199485/
17:38 glusterbot Title: #111228 Fedora Project Pastebin (at fpaste.org)
17:39 jkroon bennyturns, yea, always set policy first :)
17:41 jkroon ok, that script isn't what I would use but it'll suffice for this test case.  mine would be a lot more complex wrt state tracking etc ... but that probably won't achieve what we want here anyway
17:41 jkroon the other option you could have used, iptables -I INPUT 1 -s ${peerip} -j DROP
17:41 wcay Has anyone ever created a new gluster volume on top of an existing set of bricks from a previous gluster volume (i.e. for comparing restored bricks to failed bricks in a DR scenario)?
17:43 wcay The only method I found to do so was to use the force command in the create volume statement and then ls -lR through the entire volume (mounted on a remote server) to get gluster to fully recognize all the files and sizes.
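    A sketch of that DR-style recreate-and-walk approach with placeholder names; note that bricks still carrying xattrs from the old volume may also need the volume-id cleared (as discussed earlier in this log) before create succeeds even with force:
        gluster volume create drvol replica 2 serverA:/bricks/b1 serverB:/bricks/b1 force
        gluster volume start drvol
        mount -t glusterfs serverA:/drvol /mnt/drvol
        ls -lR /mnt/drvol > /dev/null    # walk the tree so lookups repopulate gluster's view of the existing files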
17:43 stickyboy joined #gluster
17:44 bennyturns jkroon, ya didn't repro for me this go.  I'll keep testing to see if I can come up with a better scenario
17:50 jkroon bennyturns, I suspect it might be because multiple flock commands got in there one each system (switch outage was around 15 minutes), and I've now added a system-local lock before going to the cluster side.  seen similar things on gluster in such cases, so i'm hoping that will prevent the issue from escalating as badly as it did.
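    A hypothetical sketch of the "system-local lock first" mitigation jkroon describes, so that at most one process per node ends up blocked on the cluster-wide flock if the volume becomes unreachable; both lock file paths are placeholders:
        exec 4>/var/lock/myjob.local            # node-local lock file
        flock -w2 -x 4 || exit 1
        exec 5>/mnt/gluster/myjob.lock          # lock file on the gluster mount
        flock -w2 -x 5 || { exec 5>&- 4>&-; exit 1; }
        # ... critical section ...
        exec 5>&- 4>&-                          # closing the fds releases both locks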
17:51 kiwikrisp So I reloaded my glusters and everything seems to be working fine except the data that was already in the brick folders don't seem to be available through the gluster nfs server. Is there something I'm missing there? Any new files I add through NFS show up. I just need a way for the gluster service to find and start serving the data already in the folder.
17:51 jkroon but of course - I need another outage in order know for sure and I'm very sure my client will be extremely happy with me randomly bouncing switches with bad STP to try and reproduce.
17:57 Ark joined #gluster
18:07 qdk joined #gluster
18:10 mjsmith2 joined #gluster
18:10 fpetersen joined #gluster
18:10 lmickh joined #gluster
18:10 Guest26454 joined #gluster
18:10 jcsp_ joined #gluster
18:10 systemonkey joined #gluster
18:10 Mo_ joined #gluster
18:10 sh_t joined #gluster
18:10 zerick joined #gluster
18:10 zaitcev joined #gluster
18:10 jag3773 joined #gluster
18:10 Matthaeus joined #gluster
18:10 theron_ joined #gluster
18:10 MacWinner joined #gluster
18:10 amesritter joined #gluster
18:10 wcay joined #gluster
18:10 elico joined #gluster
18:10 jbrooks joined #gluster
18:10 jobewan joined #gluster
18:10 LebedevRI joined #gluster
18:10 wushudoin joined #gluster
18:10 coredump joined #gluster
18:10 bene3 joined #gluster
18:10 japuzzo joined #gluster
18:10 kiwikrisp joined #gluster
18:10 ghenry joined #gluster
18:10 edward1 joined #gluster
18:10 brad_mssw joined #gluster
18:10 marbu joined #gluster
18:10 halfinhalfout joined #gluster
18:10 bennyturns joined #gluster
18:10 sroy_ joined #gluster
18:10 Pupeno joined #gluster
18:10 hagarth joined #gluster
18:10 Paul-C joined #gluster
18:10 harish joined #gluster
18:10 VerboEse joined #gluster
18:10 Norky joined #gluster
18:10 romero joined #gluster
18:10 cyberbootje joined #gluster
18:10 the-me joined #gluster
18:10 l0uis joined #gluster
18:10 juhaj_ joined #gluster
18:10 overclk_ joined #gluster
18:10 hybrid512 joined #gluster
18:10 [o__o] joined #gluster
18:10 purpleidea joined #gluster
18:10 suliba joined #gluster
18:10 hchiramm__ joined #gluster
18:10 jvandewege joined #gluster
18:10 huleboer joined #gluster
18:10 cmtime joined #gluster
18:10 gmcwhistler joined #gluster
18:10 rotbeard joined #gluster
18:10 n0de joined #gluster
18:10 XpineX joined #gluster
18:10 jph98 joined #gluster
18:10 sac`away joined #gluster
18:10 prasanth|brb joined #gluster
18:10 eshy joined #gluster
18:10 tty00 joined #gluster
18:10 ThatGraemeGuy joined #gluster
18:10 a2 joined #gluster
18:10 T0aD joined #gluster
18:10 GabrieleV joined #gluster
18:10 fyxim_ joined #gluster
18:10 primusinterpares joined #gluster
18:10 AaronGr joined #gluster
18:10 cfeller joined #gluster
18:10 mtanner_ joined #gluster
18:10 simulx joined #gluster
18:10 marcoceppi joined #gluster
18:10 [ilin] joined #gluster
18:10 ninkotech joined #gluster
18:10 eightyeight joined #gluster
18:10 yosafbridge joined #gluster
18:10 mibby joined #gluster
18:10 jezier_ joined #gluster
18:10 _NiC joined #gluster
18:10 DanF joined #gluster
18:10 and` joined #gluster
18:10 eclectic joined #gluster
18:10 NuxRo joined #gluster
18:10 coredumb joined #gluster
18:10 abyss_ joined #gluster
18:10 txmoose joined #gluster
18:10 flowouffff joined #gluster
18:10 muhh joined #gluster
18:10 sman joined #gluster
18:10 Kins joined #gluster
18:10 samppah joined #gluster
18:10 churnd joined #gluster
18:10 lkoranda joined #gluster
18:10 Gugge joined #gluster
18:10 neoice joined #gluster
18:10 crashmag joined #gluster
18:10 sspinner joined #gluster
18:10 efries joined #gluster
18:10 FooBar joined #gluster
18:10 troj_ joined #gluster
18:10 tomased joined #gluster
18:10 tg2 joined #gluster
18:10 social joined #gluster
18:10 ackjewt joined #gluster
18:10 vincent_vdk joined #gluster
18:10 Slasheri joined #gluster
18:10 klaas joined #gluster
18:10 delhage joined #gluster
18:10 hflai joined #gluster
18:10 radez_g0n3 joined #gluster
18:10 sulky joined #gluster
18:10 Bardack joined #gluster
18:10 al joined #gluster
18:10 velladecin joined #gluster
18:10 Rydekull joined #gluster
18:10 saltsa joined #gluster
18:10 msvbhat joined #gluster
18:10 tru_tru joined #gluster
18:10 JustinClift joined #gluster
18:10 xavih joined #gluster
18:10 JonathanD joined #gluster
18:10 sage__ joined #gluster
18:10 gts__ joined #gluster
18:10 atrius joined #gluster
18:10 twx joined #gluster
18:10 mwoodson joined #gluster
18:10 kke joined #gluster
18:10 Peanut joined #gluster
18:10 codex joined #gluster
18:10 georgeh|workstat joined #gluster
18:10 tjikkun joined #gluster
18:10 brad[] joined #gluster
18:10 NCommander joined #gluster
18:10 Alex joined #gluster
18:10 capri joined #gluster
18:10 siel joined #gluster
18:10 asku joined #gluster
18:10 swebb joined #gluster
18:10 monotek joined #gluster
18:10 edwardm61 joined #gluster
18:10 uebera|| joined #gluster
18:10 ccha3 joined #gluster
18:10 k3rmat joined #gluster
18:10 rturk-away joined #gluster
18:10 verdurin joined #gluster
18:10 lanning joined #gluster
18:10 firemanxbr joined #gluster
18:10 ernetas joined #gluster
18:10 ninkotech_ joined #gluster
18:10 tziOm joined #gluster
18:10 Thilam joined #gluster
18:10 bfoster joined #gluster
18:10 bchilds joined #gluster
18:10 mkzero joined #gluster
18:10 pdrakeweb joined #gluster
18:10 oxidane joined #gluster
18:10 ron-slc joined #gluster
18:10 Ramereth joined #gluster
18:10 atrius` joined #gluster
18:10 fsimonce joined #gluster
18:10 SpComb joined #gluster
18:10 JordanHackworth joined #gluster
18:10 Andreas-IPO joined #gluster
18:10 ultrabizweb joined #gluster
18:10 osiekhan1 joined #gluster
18:10 lezo joined #gluster
18:10 samkottler joined #gluster
18:10 decimoe joined #gluster
18:10 Georgyo joined #gluster
18:10 masterzen joined #gluster
18:10 _jmp_ joined #gluster
18:10 mjrosenb joined #gluster
18:10 pasqd joined #gluster
18:10 foster joined #gluster
18:10 johnmwilliams__ joined #gluster
18:14 doekia joined #gluster
18:16 doekia joined #gluster
18:16 mjsmith2 joined #gluster
18:16 fpetersen joined #gluster
18:16 lmickh joined #gluster
18:16 Guest26454 joined #gluster
18:16 jcsp_ joined #gluster
18:16 systemonkey joined #gluster
18:16 Mo_ joined #gluster
18:16 sh_t joined #gluster
18:16 zerick joined #gluster
18:16 zaitcev joined #gluster
18:16 jag3773 joined #gluster
18:16 Matthaeus joined #gluster
18:16 theron_ joined #gluster
18:16 MacWinner joined #gluster
18:16 amesritter joined #gluster
18:16 elico joined #gluster
18:16 jbrooks joined #gluster
18:16 jobewan joined #gluster
18:16 LebedevRI joined #gluster
18:16 wushudoin joined #gluster
18:16 coredump joined #gluster
18:16 bene3 joined #gluster
18:16 japuzzo joined #gluster
18:16 kiwikrisp joined #gluster
18:16 ghenry joined #gluster
18:16 edward1 joined #gluster
18:16 brad_mssw joined #gluster
18:16 marbu joined #gluster
18:16 halfinhalfout joined #gluster
18:16 sroy_ joined #gluster
18:16 Pupeno joined #gluster
18:16 hagarth joined #gluster
18:16 Paul-C joined #gluster
18:16 harish joined #gluster
18:16 VerboEse joined #gluster
18:16 Norky joined #gluster
18:16 romero joined #gluster
18:16 cyberbootje joined #gluster
18:16 the-me joined #gluster
18:16 l0uis joined #gluster
18:16 juhaj_ joined #gluster
18:16 overclk_ joined #gluster
18:16 hybrid512 joined #gluster
18:16 [o__o] joined #gluster
18:16 purpleidea joined #gluster
18:16 suliba joined #gluster
18:16 hchiramm__ joined #gluster
18:16 jvandewege joined #gluster
18:16 huleboer joined #gluster
18:16 cmtime joined #gluster
18:16 gmcwhistler joined #gluster
18:16 rotbeard joined #gluster
18:16 n0de joined #gluster
18:16 XpineX joined #gluster
18:16 jph98 joined #gluster
18:16 sac`away joined #gluster
18:16 prasanth|brb joined #gluster
18:16 eshy joined #gluster
18:16 tty00 joined #gluster
18:16 ThatGraemeGuy joined #gluster
18:16 a2 joined #gluster
18:16 T0aD joined #gluster
18:16 GabrieleV joined #gluster
18:16 fyxim_ joined #gluster
18:16 AaronGr joined #gluster
18:16 cfeller joined #gluster
18:16 mtanner_ joined #gluster
18:16 simulx joined #gluster
18:16 marcoceppi joined #gluster
18:16 [ilin] joined #gluster
18:16 ninkotech joined #gluster
18:16 eightyeight joined #gluster
18:16 yosafbridge joined #gluster
18:16 mibby joined #gluster
18:16 jezier_ joined #gluster
18:16 _NiC joined #gluster
18:16 DanF joined #gluster
18:16 and` joined #gluster
18:16 eclectic joined #gluster
18:16 NuxRo joined #gluster
18:16 coredumb joined #gluster
18:16 abyss_ joined #gluster
18:16 txmoose joined #gluster
18:16 flowouffff joined #gluster
18:16 muhh joined #gluster
18:16 sman joined #gluster
18:16 Kins joined #gluster
18:16 samppah joined #gluster
18:16 churnd joined #gluster
18:16 lkoranda joined #gluster
18:16 Gugge joined #gluster
18:16 neoice joined #gluster
18:16 crashmag joined #gluster
18:16 sspinner joined #gluster
18:16 efries joined #gluster
18:16 FooBar joined #gluster
18:16 troj_ joined #gluster
18:16 tomased joined #gluster
18:16 tg2 joined #gluster
18:16 social joined #gluster
18:16 ackjewt joined #gluster
18:16 vincent_vdk joined #gluster
18:16 Slasheri joined #gluster
18:16 klaas joined #gluster
18:16 delhage joined #gluster
18:16 hflai joined #gluster
18:16 radez_g0n3 joined #gluster
18:16 sulky joined #gluster
18:16 Bardack joined #gluster
18:16 al joined #gluster
18:16 velladecin joined #gluster
18:16 Rydekull joined #gluster
18:16 saltsa joined #gluster
18:16 msvbhat joined #gluster
18:16 tru_tru joined #gluster
18:16 JustinClift joined #gluster
18:16 xavih joined #gluster
18:16 JonathanD joined #gluster
18:16 sage__ joined #gluster
18:16 gts__ joined #gluster
18:16 atrius joined #gluster
18:16 twx joined #gluster
18:16 mwoodson joined #gluster
18:16 kke joined #gluster
18:16 Peanut joined #gluster
18:16 codex joined #gluster
18:16 georgeh|workstat joined #gluster
18:16 tjikkun joined #gluster
18:16 brad[] joined #gluster
18:16 NCommander joined #gluster
18:16 Alex joined #gluster
18:16 capri joined #gluster
18:16 siel joined #gluster
18:16 asku joined #gluster
18:16 swebb joined #gluster
18:16 monotek joined #gluster
18:16 edwardm61 joined #gluster
18:16 uebera|| joined #gluster
18:16 ccha3 joined #gluster
18:16 k3rmat joined #gluster
18:16 rturk-away joined #gluster
18:16 verdurin joined #gluster
18:16 lanning joined #gluster
18:16 firemanxbr joined #gluster
18:16 ernetas joined #gluster
18:16 ninkotech_ joined #gluster
18:16 tziOm joined #gluster
18:16 Thilam joined #gluster
18:16 bfoster joined #gluster
18:16 mkzero joined #gluster
18:16 pdrakeweb joined #gluster
18:16 oxidane joined #gluster
18:16 ron-slc joined #gluster
18:16 Ramereth joined #gluster
18:16 atrius` joined #gluster
18:16 fsimonce joined #gluster
18:16 SpComb joined #gluster
18:16 JordanHackworth joined #gluster
18:16 Andreas-IPO joined #gluster
18:16 ultrabizweb joined #gluster
18:16 osiekhan1 joined #gluster
18:16 lezo joined #gluster
18:16 samkottler joined #gluster
18:16 decimoe joined #gluster
18:16 Georgyo joined #gluster
18:16 masterzen joined #gluster
18:16 _jmp_ joined #gluster
18:16 mjrosenb joined #gluster
18:16 pasqd joined #gluster
18:16 foster joined #gluster
18:16 johnmwilliams__ joined #gluster
18:17 jiffe98 joined #gluster
18:18 rturk joined #gluster
18:19 theron joined #gluster
18:20 AbrekUS joined #gluster
18:21 AbrekUS can someone help me with 3.5 geo-replication setup?
18:22 bennyturns joined #gluster
18:23 sh_t joined #gluster
18:23 bennyturns joined #gluster
18:24 AbrekUS I'm able to ssh to the remote server via SSH:
18:24 AbrekUS # ssh gluster.example.com
18:24 AbrekUS Last login: Thu Jun 19 18:17:46 2014 from ...
18:24 AbrekUS [root@ip-X-X-X-X ~]#
18:24 AbrekUS but "gluster volume geo-replication volume1 gluster.example.com::backup create push-pem" fails and log file has:
18:24 AbrekUS 0-glusterfs: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options
18:25 Matthaeus What happens when you dig gluster.example.com?
18:25 giannello joined #gluster
18:26 AbrekUS it resolves to IP of the server
18:26 AbrekUS i.e. DNS is working fine and I'm able to SSH to the remote server
18:26 Matthaeus Just an A record, or do you also have an AAAA record?
18:26 AbrekUS just A record
18:27 deeville joined #gluster
18:27 primusinterpares joined #gluster
18:28 AbrekUS I found discussion about Gluster being confused about socket setting (IPv4 vs IPv6) but not sure whether it is related with my issue
18:28 deeville I'm testing a 2-node replicated gluster set up by using it as shared storage for a 30-node compute cluster. The native gluster fuse client is used to mount the gluster volume via node 2. How does the load-balancing work?
18:28 AbrekUS I've checked all Gluster processes running on my server and all network related FDs are IPv4 FDs
18:29 deeville If node 2 has 100% load, will node 1 help?
18:29 Matthaeus AbrekUS: You can specify the transport mode (tcp vs ib, I think) on the command line.  Try that and see if it helps.
18:30 AbrekUS I've tried that but can't get right syntax
18:30 AbrekUS ...  create push-pem option transport.address-family=socket
18:30 AbrekUS Command type not found while handling geo-replication options
18:31 AbrekUS ... create push-pem transport.address-family=socket
18:31 AbrekUS Command type not found while handling geo-replication options
18:33 daMaestro joined #gluster
18:33 tdasilva joined #gluster
18:34 morse joined #gluster
18:34 spajus joined #gluster
18:34 sadbox joined #gluster
18:34 dblack joined #gluster
18:34 Intensity joined #gluster
18:39 halfinhalfout re native client & failure of a gluster server. any way to prevent clients from reading from/writing to a gluster replica server while it's healing?
18:40 semiosis deeville: *if* you're replicating between the two servers, then writes go to both, and reads will come from either one.  gluster tries to be smart about balancing the read workload between the replicas.  i dont know the details
18:40 semiosis but am pretty sure it chooses which replica to read from on a per-file basis
18:40 semiosis when the file is opened
18:41 semiosis halfinhalfout: not that I know of, however, if a file needs to be healed then it will be healed on-demand when it is accessed
18:41 semiosis you shouldn't need to quarantine a server
18:42 semiosis *when it is accessed before the self-heal daemon gets to it
18:42 deeville semiosis, thanks for the reply. I didn't know that writes go to both, that's why it's a bit slow. But it makes sense in a replicated scenario. For reads though, I find the load goes as high as 80% on one node, but the load on the other node is still pretty much 0%
18:45 halfinhalfout semiosis: thx. I ask 'cause I had a long running copy process fail on some files during a gluster replica server reboot. the fail happened when the rebooting server came back up. I think 'cause the copy tried to write to a dir on the rebooted gluster server that didn't yet exist, 'cause heal hadn't finished
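    A quick way to check whether a rebooted replica has caught up before relying on it again; a sketch with a placeholder volume name:
        gluster volume heal myvol info
        gluster volume heal myvol info heal-failed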
18:50 giannello joined #gluster
18:57 AbrekUS does anyone have Gluster 3.5 with geo-replication setup?
18:59 halfinhalfout AbrekUS: I've got it setup in a staging env.
19:00 AbrekUS halfinhalfout: did you have "transport.address-family" error?
19:01 [o__o] joined #gluster
19:02 bchilds joined #gluster
19:02 AbrekUS halfinhalfout: I'm able to SSH to remote Gluster server, but geo-replication "create" command fails with "gluster.example.com not reachable" and log file has "transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options"
19:02 diegows joined #gluster
19:03 bchilds I have 2 gluster volumes on 2 clusters.  one shows a full hostname+domain in the trusted.glusterfs.pathinfo xattr, the other custer only shows hostname.  what would cause this?
19:04 semiosis hi brad
19:04 brad_mssw left #gluster
19:05 bchilds hey semiosis
19:05 semiosis bchilds: my guess would be the server names used to probe peers & create the volume... are the hostnames the same as reported by gluster volume info
19:05 semiosis ?
19:05 bchilds gluster volume info shows the full host+domain
19:06 semiosis hmmm
19:06 bchilds Brick6: host12-rack07.scale.openstack.engineering.redhat.com:/mnt/brick1/HadoopVol
19:06 bchilds the xattr has:
19:06 bchilds trusted.glusterfs.pathinfo="(<DISTRIBUTE:HadoopVol-dht> (<REPLICATE:HadoopVol-replicate-2> <POSIX(/mnt/brick1/HadoopVol):host12-rack07:/mnt/brick1/HadoopVol/user/tom/in-dir/words5> <POSIX(/mnt/brick1/HadoopVol):host10-rack07:/mnt/brick1/HadoopVol/user/tom/in-dir/words5>))"
19:06 semiosis are glusterfs versions the same on these two clusters?
19:06 bchilds but on my other storage cluster the domains are in the xattr
19:07 bchilds no looks like 3.4 vs 3.6
19:07 semiosis a ha
19:07 halfinhalfout AbrekUS: yes, I've seen the same when trying to create geo-rep session
19:07 bchilds is there a change around this area in 3.4 vs 3.6?
19:08 semiosis my next guess would be to check /etc/hosts, although i'm not aware of any time gluster does a reverse lookup or anything else that could change how a host appears
19:08 AbrekUS halfinhalfout: and how did you fix this issue?
19:09 semiosis bchilds: i'll take a minute to check in git, but dont know anything off the top of my head
19:09 haomaiwa_ joined #gluster
19:10 halfinhalfout for me, fixed by doing 1) cp /var/lib/glusterd/geo-replication/secret.pem ~/.ssh/id_rsa 2) *not* restricting remote command execution in the secret.pem.pub that I put in /root/.ssh/authorized_keys on the slave
19:13 halfinhalfout I think there are bugs in gverify.sh, which is called as part of the geo-rep create process, and those steps work around those bugs ...
19:13 halfinhalfout but that's just my theory … haven't looked closely
19:15 _dist joined #gluster
19:17 halfinhalfout AbrekUS: if you can get "gverify.sh <volume_name> <slave_fqdn> <slave_volume_name>" to work, the create geo-rep session will prolly work
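    Put together, the workaround described above looks roughly like this on the master (3.5-era default paths; the gverify.sh location may differ by distro, and the volume/host names are the ones from AbrekUS's earlier lines):
        cp /var/lib/glusterd/geo-replication/secret.pem /root/.ssh/id_rsa
        # on the slave: add secret.pem.pub to /root/.ssh/authorized_keys without a command= restriction
        bash /usr/libexec/glusterfs/gverify.sh volume1 gluster.example.com backup
        gluster volume geo-replication volume1 gluster.example.com::backup create push-pem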
19:18 tjikkun_ joined #gluster
19:25 AbrekUS halfinhalfout: thank you
19:27 semiosis bchilds: idk what to look for.  nothing jumped out at me when i grepped for pathinfo
19:27 bchilds ok i’ll start grepping around some more
19:27 bchilds thanks for looking!
19:27 halfinhalfout AbrekUS: np
19:28 semiosis bchilds: well, one thing... https://github.com/gluster/glusterfs/blob/master/xlators/cluster/dht/src/dht-common.c#L1729
19:28 glusterbot Title: glusterfs/xlators/cluster/dht/src/dht-common.c at master · gluster/glusterfs · GitHub (at github.com)
19:28 n0de Hey guys, my Gluster (v 3.4) started acting up today. Here is the error I am seeing in the log
19:28 n0de [2014-06-19 19:27:41.012642] E [afr-self-heal-common.c:2212:afr_self_heal_completion_cbk] 0-th-storage-replicate-11: background  entry self-heal failed on /
19:29 n0de I do have a rebalance operation running for a few months now
19:29 _dist FYI, I closed the bug I created yesterday. The issue I had was related to my ext4 (for some reason) not being able to store gluster xattrs. I have kept it, and will test later to figure out how that's even possible.
19:30 glusterbot New news from resolvedglusterbugs: [Bug 1110914] Missing xattr data after adding replica brick <https://bugzilla.redhat.com/show_bug.cgi?id=1110914>
19:32 cmtime I have a crazy split brain I need to solve but I am not sure how to solve it safely since this is live.  gf-osn10 lost the array and we added it back in and it re-replicated from gf-osn09 and I ended up here.  Healed-failed is empty right now but I probably need to trigger a new heal.  Any advice on how to deal with this?  http://fpaste.org/111266/04451140/
19:32 glusterbot Title: #111266 Fedora Project Pastebin (at fpaste.org)
19:37 JoeJulian cmtime: "gluster volume heal gf-osn full" should at least resolve those gfids.
19:39 cmtime It has not
19:39 cmtime I have done it once and they seem to persist
19:40 cmtime But I can start it again over the weekend.  I just had to restart all the nodes because my 12 server's glusterd was not working 100% right causing slow performance on the whole setup.
19:41 JoeJulian cmtime:  Then you'll probably want to resolve those gfids to filenames on the bricks using the ,,(gfid resolver)
19:41 glusterbot cmtime: https://gist.github.com/4392640
19:41 JoeJulian glusterbot: lag much?
19:42 cmtime that will be fun with so many files lol
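    What the linked gfid resolver automates, roughly: each gfid is hard-linked (files) or symlinked (directories) under the brick's .glusterfs directory, so it can be mapped back to a path on that brick. Brick path and gfid below are placeholders:
        gfid="de4ec079-6a78-40bd-8a47-a7d8db6cbb27"
        gpath="/bricks/b1/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid"
        readlink "$gpath" 2>/dev/null \
            || find /bricks/b1 -samefile "$gpath" | grep -v '\.glusterfs'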
19:50 bennyturns joined #gluster
19:51 Matthaeus joined #gluster
20:02 ctria joined #gluster
20:33 halfinhalfout1 joined #gluster
20:37 halfinhalfout joined #gluster
20:38 sjm joined #gluster
20:42 halfinhalfout advice on troubleshoot *distributed* part of geo-rep?
20:53 deeville joined #gluster
20:57 giannello joined #gluster
21:16 giannello joined #gluster
21:27 jason__ joined #gluster
21:30 ghenry joined #gluster
21:34 Ark joined #gluster
21:44 calum_ joined #gluster
21:50 theron joined #gluster
21:58 giannello joined #gluster
22:18 bchilds semiosis : re my earlier issue.  turns out its an issue with IPV4 vs IPV6.  with IPV4 the pathinfo xattr has full host+domain, with IPV4+IPV6 pathinfo has host only
22:19 giannello joined #gluster
22:19 bchilds the cluster isn't actually using IPV6, it's just enabled
22:30 semiosis wow!
22:30 giannello joined #gluster
22:42 fidevo joined #gluster
22:45 elico JoeJulian: how was it or how is it with the couplt PB storage?
22:53 n0de Hey guys, my Gluster (v 3.4) started acting up today. Here is the error I am seeing in the log
22:53 n0de [2014-06-19 19:27:41.012642] E [afr-self-heal-common.c:2212:afr_self_heal_completion_cbk] 0-th-storage-replicate-11: background  entry self-heal failed on /
22:54 n0de I do have a rebalance operation running for a few months now
23:03 ceddybu joined #gluster
23:05 n0de will restarting gluster on one of the nodes I suspect is causing the trouble break my rebalance?
23:06 ceddybu1 joined #gluster
23:14 bennyturns joined #gluster
23:28 JoeJulian n0de: rebalance is broken in anything prior to the most recently patched 3.4.4.
23:39 n0de JoeJulian: So at this point there is no reason to let it continue to run?
23:39 n0de I should just upgrade Gluster to 3.4.4 and start fresh with the rebalance?
23:41 gildub joined #gluster
23:44 Ark joined #gluster
23:53 elico joined #gluster
