
IRC log for #gluster, 2014-11-18


All times shown according to UTC.

Time Nick Message
00:11 sputnik13 joined #gluster
00:33 kr0w left #gluster
00:49 sputnik13 joined #gluster
01:05 sputnik13 joined #gluster
01:09 gildub joined #gluster
01:14 cyberbootje joined #gluster
01:19 soumya_ joined #gluster
01:23 topshare joined #gluster
01:36 bala joined #gluster
01:44 sputnik13 joined #gluster
01:54 topshare joined #gluster
01:54 haomaiw__ joined #gluster
01:56 julim joined #gluster
02:00 harish joined #gluster
02:07 bala joined #gluster
02:10 flu_ joined #gluster
02:10 _Bryan_ joined #gluster
02:15 David_H__ joined #gluster
02:18 calisto joined #gluster
02:20 joevartuli joined #gluster
02:22 joevartuli left #gluster
02:39 sputnik13 joined #gluster
02:40 topshare joined #gluster
02:53 hagarth joined #gluster
03:08 atalur joined #gluster
03:11 sputnik13 joined #gluster
03:18 David_H_Smith joined #gluster
03:19 David_H_Smith joined #gluster
03:22 plarsen joined #gluster
03:22 David_H_Smith joined #gluster
03:24 haomaiwa_ joined #gluster
03:40 rejy joined #gluster
03:41 rmc_ joined #gluster
03:50 meghanam joined #gluster
03:50 meghanam_ joined #gluster
03:52 David_H_Smith joined #gluster
03:56 itisravi joined #gluster
04:01 RameshN joined #gluster
04:07 nishanth joined #gluster
04:16 spandit joined #gluster
04:17 atalur joined #gluster
04:21 shubhendu joined #gluster
04:25 ndarshan joined #gluster
04:26 nbalachandran joined #gluster
04:30 kanagaraj joined #gluster
04:31 rafi1 joined #gluster
04:31 Rafi_kc joined #gluster
04:32 atinmu joined #gluster
04:32 anoopcs joined #gluster
04:33 ppai joined #gluster
04:38 SOLDIERz joined #gluster
04:46 meghanam joined #gluster
04:46 meghanam_ joined #gluster
04:53 rejy joined #gluster
04:53 David_H_Smith joined #gluster
04:53 lalatenduM joined #gluster
05:02 kumar joined #gluster
05:08 aravindavk joined #gluster
05:12 dusmant joined #gluster
05:16 pp joined #gluster
05:24 aravinda_ joined #gluster
05:25 karnan joined #gluster
05:25 sputnik13 joined #gluster
05:31 flu__ joined #gluster
05:39 saurabh joined #gluster
05:40 sahina_ joined #gluster
05:45 kshlm joined #gluster
05:47 sputnik13 joined #gluster
05:47 ramteid joined #gluster
05:52 sputnik13 joined #gluster
05:54 David_H_Smith joined #gluster
06:01 jiffin joined #gluster
06:02 coredumb hello
06:02 glusterbot coredumb: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:02 coredumb hi, was wondering if anyone had tried to host Git repos on glusterfs
06:02 coredumb and if there was any impact on performance
06:05 glusterbot New news from newglusterbugs: [Bug 1165010] Regression TestFrameWork : Starting a process fails with "Port already in use" error in our regression test framework <https://bugzilla.redhat.com/show_bug.cgi?id=1165010>
06:06 hagarth joined #gluster
06:07 sputnik1_ joined #gluster
06:12 sputnik13 joined #gluster
06:19 topshare joined #gluster
06:25 topshare joined #gluster
06:28 saurabh joined #gluster
06:30 SOLDIERz joined #gluster
06:32 SOLDIERz__ joined #gluster
06:35 rjoseph joined #gluster
06:41 LebedevRI joined #gluster
06:50 dusmant joined #gluster
06:51 ctria joined #gluster
06:53 shubhendu joined #gluster
06:55 David_H_Smith joined #gluster
07:00 nshaikh joined #gluster
07:05 glusterbot New news from newglusterbugs: [Bug 1165021] gstatus: When a volume is stopped, status should be shown as stopped instead of unhealthy <https://bugzilla.redhat.com/show_bug.cgi?id=1165021> || [Bug 1108448] selinux alerts starting glusterd in f20 <https://bugzilla.redhat.com/show_bug.cgi?id=1108448>
07:06 dusmant joined #gluster
07:08 Paul-C joined #gluster
07:08 ricky-ticky joined #gluster
07:10 sputnik13 joined #gluster
07:10 dusmant joined #gluster
07:13 Debloper joined #gluster
07:18 ekuric joined #gluster
07:19 rjoseph joined #gluster
07:21 SOLDIERz__ joined #gluster
07:29 Fen2 joined #gluster
07:39 rjoseph joined #gluster
07:39 SOLDIERz__ joined #gluster
07:39 overclk joined #gluster
07:44 topshare joined #gluster
07:47 Paul-C left #gluster
07:47 dusmant joined #gluster
07:48 sputnik13 joined #gluster
07:56 David_H_Smith joined #gluster
07:58 mator_ joined #gluster
08:12 hybrid512 joined #gluster
08:27 fyxim_ joined #gluster
08:27 samkottler joined #gluster
08:33 glusterbot New news from resolvedglusterbugs: [Bug 1164768] Write strings to a file by O_APPEND mode (echo "strings" >> /mountpoint/file.txt) is abnormal <https://bugzilla.redhat.com/show_bug.cgi?id=1164768>
08:35 cultavix joined #gluster
08:36 fsimonce joined #gluster
08:42 nishanth joined #gluster
08:47 spandit joined #gluster
08:49 mbukatov joined #gluster
08:50 dusmant joined #gluster
08:50 shubhendu joined #gluster
08:51 ricky-ticky joined #gluster
08:53 ndarshan joined #gluster
08:53 sahina_ joined #gluster
08:54 Slashman joined #gluster
08:57 David_H_Smith joined #gluster
08:58 bala joined #gluster
09:03 DV joined #gluster
09:04 overclk hagarth, ping
09:04 glusterbot overclk: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:05 overclk hagarth, ping. Any plans to send out hangout invite regarding BitRot?
09:07 hagarth overclk: waiting on davemc, let us check with him when he's online later.
09:07 T0aD joined #gluster
09:07 MrAbaddon joined #gluster
09:09 hybrid512 joined #gluster
09:10 hagarth overclk: might be good to have a tentative target for this thursday, nevertheless
09:13 Pupeno joined #gluster
09:15 flu__ joined #gluster
09:16 flu_ joined #gluster
09:16 rjoseph joined #gluster
09:21 topshare joined #gluster
09:27 harish joined #gluster
09:29 DV joined #gluster
09:30 [Enrico] joined #gluster
09:30 deepakcs joined #gluster
09:43 ilbot3 joined #gluster
09:43 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/ | GlusterFS 3.6 test week - https://public.pad.fsfe.org/p/GlusterFS-3.6-test-doc
09:44 shubhendu joined #gluster
09:46 spandit joined #gluster
09:47 nishanth joined #gluster
09:48 ndarshan joined #gluster
09:48 overclk hagarth, thanks! makes sense..
09:50 Inflatablewoman joined #gluster
09:50 bala joined #gluster
09:55 sahina_ joined #gluster
09:55 dusmant joined #gluster
09:55 rjoseph joined #gluster
09:57 David_H_Smith joined #gluster
10:01 Debloper joined #gluster
10:02 ira joined #gluster
10:03 soumya_ joined #gluster
10:12 topshare joined #gluster
10:22 calisto joined #gluster
10:36 glusterbot New news from newglusterbugs: [Bug 1161893] volume no longer available after update to 3.6.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1161893>
10:49 rjoseph joined #gluster
10:50 hagarth joined #gluster
10:58 David_H_Smith joined #gluster
11:06 glusterbot New news from newglusterbugs: [Bug 1158746] client process will hang if server is started to send the request before completing connection establishment. <https://bugzilla.redhat.com/show_bug.cgi?id=1158746>
11:13 flu__ joined #gluster
11:14 flu_ joined #gluster
11:15 Inflatablewoman joined #gluster
11:17 smohan joined #gluster
11:24 calisto joined #gluster
11:32 saurabh joined #gluster
11:32 _br_ joined #gluster
11:38 kkeithley1 joined #gluster
11:47 edward1 joined #gluster
11:52 ndevos REMINDER: Gluster Bug Triage meeting starting in 8 minutes on #gluster-meeting
11:58 meghanam joined #gluster
11:58 meghanam_ joined #gluster
11:59 David_H_Smith joined #gluster
12:01 ctrianta joined #gluster
12:02 haomaiwa_ joined #gluster
12:05 jdarcy joined #gluster
12:06 glusterbot New news from newglusterbugs: [Bug 1164523] openat syscall fails on glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1164523>
12:08 ppai joined #gluster
12:09 hagarth1 joined #gluster
12:16 stickyboy joined #gluster
12:16 stickyboy joined #gluster
12:17 kanagaraj joined #gluster
12:20 bene2 joined #gluster
12:20 itisravi_ joined #gluster
12:22 RameshN joined #gluster
12:25 diegows joined #gluster
12:29 [Enrico] joined #gluster
12:32 nbalachandran joined #gluster
12:34 glusterbot New news from resolvedglusterbugs: [Bug 1160710] libgfapi: use versioned symbols in libgfapi.so for compatibility <https://bugzilla.redhat.com/show_bug.cgi?id=1160710> || [Bug 1159253] GlusterFS 3.6.1 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1159253>
12:36 glusterbot New news from newglusterbugs: [Bug 1165140] hardcoded gsyncd path causes geo-replication to fail on non-redhat systems <https://bugzilla.redhat.com/show_bug.cgi?id=1165140> || [Bug 1165142] hardcoded gsyncd path causes geo-replication to fail on non-redhat systems <https://bugzilla.redhat.com/show_bug.cgi?id=1165142> || [Bug 1163561] A restarted child can not clean files/directories which were deleted while down <https://bugzilla.redhat.com/show
12:36 SOLDIERz___ joined #gluster
12:42 mojibake joined #gluster
12:46 plarsen joined #gluster
12:59 David_H_Smith joined #gluster
12:59 Fen1 joined #gluster
13:06 Slashman joined #gluster
13:08 RameshN joined #gluster
13:09 deniszh joined #gluster
13:10 tdasilva joined #gluster
13:25 atalur joined #gluster
13:27 ctrianta joined #gluster
13:29 smohan joined #gluster
13:36 glusterbot New news from newglusterbugs: [Bug 1138841] allow the use of the CIDR format with auth.allow <https://bugzilla.redhat.com/show_bug.cgi?id=1138841>
13:37 anoopcs joined #gluster
13:44 topshare joined #gluster
13:47 shubhendu joined #gluster
13:52 meghanam_ joined #gluster
13:52 meghanam joined #gluster
13:57 virusuy joined #gluster
13:57 virusuy joined #gluster
13:59 David_H_Smith joined #gluster
14:02 coredumb hi, was wondering if anyone had tried to host Git repos on glusterfs, mostly whether there was any impact on performance
14:03 ricky-ticky1 joined #gluster
14:17 skippy do folks generally recommend (or discourage?) mounting Gluster brick LVMs with "nobarriers" if they reside on a decent hardware RAID?
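
For context, write barriers are controlled per-mount on the brick filesystem, not by gluster itself. A minimal sketch, assuming an XFS brick on LVM backed by a RAID controller with a battery/flash-backed write cache (device and mount paths are hypothetical):

    # /etc/fstab -- nobarrier is only reasonable when the controller
    # guarantees cached writes survive a power loss
    /dev/vg_bricks/brick1  /export/brick1  xfs  defaults,inode64,nobarrier  0 0

With ext4 bricks the equivalent mount option is barrier=0.
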
14:17 dusmant joined #gluster
14:18 sputnik13 joined #gluster
14:18 ndevos coredumb: git repos (many small files) on GlusterFS won't perform well
14:18 coredumb ndevos: yeah i guess i have to benchmark it
14:18 ildefonso joined #gluster
14:19 ndevos coredumb: start a test, have coffee or lunch and see if it finished when you're back?
14:21 coredumb gosh, if it hasn't finished by then, that's bad !!! :D
14:22 ndevos coredumb: gluster and small files do not really like each other; it is one of the major improvements that are planned
14:23 coredumb yep i know
14:23 diegows joined #gluster
14:27 NuxRo Hi guys, can anyone translate this for me in English? 0-nfs-nfsv3: XID: d3248136, LOOKUP: NFS: 5(I/O error), POSIX: 5(Input/output error), FH: exportid 00000000-0000-0000-0000-000000000000, gfid 00000000-0000-0000-0000-000000000000
14:27 marcoceppi joined #gluster
14:28 coredumb ndevos: now i was wondering if a bigger cache could actually lower the impact... gonna wait for my second server to test it
14:28 ndevos NuxRo: the NFS/LOOKUP procedure with transaction-id d3248136, used a filehandle for a volume with ID 00000000-0000-0000-0000-000000000000 and GFID 00000000-0000-0000-0000-000000000000
14:29 ndevos coredumb: I doubt it
14:29 coredumb ndevos: ok
14:29 ndevos coredumb: but in the end, it all is about the performance that you deem acceptable
14:30 coredumb indeed
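
ndevos's "start a test, have coffee" suggestion is easy to script. A minimal benchmark sketch, assuming the volume is already FUSE-mounted at /mnt/gluster; the paths and repository URL are placeholders:

    #!/bin/sh
    # time a small-file-heavy git workload on local disk vs. a gluster mount
    REPO=https://example.com/some/repo.git   # hypothetical repository
    for target in /tmp/gittest /mnt/gluster/gittest; do
        rm -rf "$target" && mkdir -p "$target"
        echo "== $target =="
        time git clone --quiet "$REPO" "$target/repo"
        time git -C "$target/repo" status
    done

git status stats every file in the working tree, so it exaggerates the per-file lookup cost behind gluster's small-file problem even more than the clone itself does.
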
14:30 ndevos NuxRo: normally the exportid/volume-id would not be 00000000-0000-0000-0000-000000000000 - that is a little strange
14:31 NuxRo ndevos: does it point to a problem somewhere? On the client I get this when trying to change dir: -bash: cd: 2031: Input/output error
14:31 NuxRo i think it's split brain or smth like this
14:32 calisto joined #gluster
14:33 ndevos NuxRo: what kind of client is that?
14:34 NuxRo nfs v3 from linux
14:34 ndevos NuxRo: hmm, maybe a split-brain... I never paid attention to how that is relayed to the nfs-client
14:34 NuxRo hm, info split-brain on that volume does not show any such dirs or the gfid of that dir
14:35 NuxRo getfattr -n trusted.gfid --absolute-names -e hex <- that's how you get the gfid, right?
14:35 ndevos NuxRo: can you drop the caches of that client, and try again? "echo 3 > /proc/sys/vm/drop_caches"
14:35 ndevos yes, doing that on the brick should give you the gfid
14:36 NuxRo i can try, though it was rebooted last night and the problem did not go away
14:36 ndevos oh, thats weird
14:36 ndevos do other clients use that same directory?
14:37 ndevos NuxRo: could it be that the dir has a different gfid on different bricks?
14:37 NuxRo I will check this. Can I priv-msg you with more logs?
14:39 NuxRo ok, messaged you. drop caches didn't do anything and the gfid is the same on all bricks/servers
14:39 NuxRo perhaps this is because split-brained files inside the dir
14:39 NuxRo ?
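
The checks being discussed are scriptable. A minimal sketch, assuming replica bricks at the same path on each server; the hostnames, volume name, and paths are hypothetical:

    # the directory's gfid should be identical on every brick
    for h in server1 server2; do
        echo "== $h =="
        ssh "$h" getfattr -n trusted.gfid --absolute-names -e hex /export/brick1/2031
    done

    # list entries the self-heal daemon considers split-brained
    gluster volume heal myvol info split-brain

    # drop the NFS client's caches before retrying, as suggested above
    echo 3 > /proc/sys/vm/drop_caches
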
14:43 hagarth joined #gluster
14:44 [Enrico] joined #gluster
14:59 David_H_Smith joined #gluster
15:06 topshare joined #gluster
15:11 jobewan joined #gluster
15:12 topshare joined #gluster
15:13 topshare joined #gluster
15:13 coredump joined #gluster
15:15 wushudoin joined #gluster
15:19 jmarley joined #gluster
15:19 _shaps_ joined #gluster
15:23 kshlm joined #gluster
15:25 DV joined #gluster
15:29 jmarley_ joined #gluster
15:30 davemc overclk, hagarth, thursday is fine, but I probably can't host. I've been fighting a massive head cold for the last week, and am both way behind and barely capable of working at all
15:32 hagarth davemc: ah ok, maybe move it to early next week?
15:32 hagarth davemc: hope you feel better soon!
15:33 davemc me too. And I've got a company wide talk tomorrow on gluster and the survey.
15:34 hagarth davemc: good luck. let us know if any assistance is needed for that.
15:35 davemc I'll be repeating this for a community hangout shortly as well
15:36 hagarth cool
15:36 _dist joined #gluster
15:50 daMaestro joined #gluster
16:00 David_H_Smith joined #gluster
16:27 davemc The Gluster community is pleased to announce updated releases for the 3.4 and 3.5 families. With the release of 3.6 a few weeks ago, this brings all the current members of GlusterFS into a more stable, production-ready status.
16:27 davemc The GlusterFS 3.4.6 release is focused on bug fixes. The release notes are available at https://github.com/gluster/glusterfs/blob/v3.4.6/doc/release-notes/3.4.6.md. Download the latest GlusterFS 3.4.6 at http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.6/
16:27 davemc GlusterFS 3.5.3 is also a bug-fix oriented release. The associated release notes are at https://github.com/gluster/glusterfs/blob/v3.5.3/doc/release-notes/3.5.3.md. Download the latest GlusterFS 3.5.3 at http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.3/
16:27 glusterbot Title: Index of /pub/gluster/glusterfs/3.4/3.4.6 (at download.gluster.org)
16:27 glusterbot Title: Index of /pub/gluster/glusterfs/3.5/3.5.3 (at download.gluster.org)
16:27 davemc Also, the latest GlusterFS, 3.6.1, is available for download at http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.1/
16:27 glusterbot Title: Index of /pub/gluster/glusterfs/3.6/3.6.1 (at download.gluster.org)
16:27 davemc Obviously, the Gluster community has been hard at work, and we're not stopping there. We invite you to join in planning the next releases. GlusterFS 3.7 planning and GlusterFS 4.0 planning would love your input.
16:33 _Bryan_ joined #gluster
16:34 theron joined #gluster
16:40 David_H_Smith joined #gluster
16:49 Egidijus_ joined #gluster
16:49 Egidijus_ hi all
16:50 Egidijus_ i have a two-server (replicate), 1-client gluster setup; i am using xfs (but i have used ext4 as well)
16:51 Egidijus_ client has mounted the gluster volume with the glusterfs client and backupvolfile-server
16:51 Egidijus_ if either server goes down, the client gets upset
16:51 Egidijus_ so does the other server
16:51 Egidijus_ i have changed the network timeout on the volume to 2 seconds
16:51 Egidijus_ but it seems to be much longer than that
16:52 Egidijus_ am i doing something stupid or is this supposed to work like this ?
16:54 JoeJulian @ping-timeout
16:54 glusterbot JoeJulian: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
16:54 sputnik13 joined #gluster
16:54 meghanam_ joined #gluster
16:55 meghanam joined #gluster
16:59 ProT-0-TypE joined #gluster
16:59 hagarth joined #gluster
17:01 Egidijus_ i dont have servers that die frequently
17:02 Egidijus_ but i do have servers that write a lot
17:02 Egidijus_ and if there is a delay writing, then there may be trouble
17:04 nishanth joined #gluster
17:07 Egidijus_ glusterbot
17:07 Egidijus_ !help
17:08 MrAbaddon joined #gluster
17:09 samkottler joined #gluster
17:10 Egidijus_ JoeJulian: can i expect gluster to let the client keep writing immediately if one node fails
17:11 Egidijus_ or does the time out have to complete ?
17:14 JoeJulian I don't think there's any way to avoid the timeout. How bad is it if you have a write lag once a year or so?
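
For reference, the timeout in question is the volume option network.ping-timeout, which defaults to the 42 seconds glusterbot mentions. A minimal sketch, assuming a volume named myvol:

    # lower the ping timeout (in seconds); values this low are discouraged
    gluster volume set myvol network.ping-timeout 2

    # changed options appear under "Options Reconfigured"
    gluster volume info myvol

    # put the default back
    gluster volume reset myvol network.ping-timeout
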
17:14 theron joined #gluster
17:35 hflai joined #gluster
17:37 saurabh joined #gluster
17:41 lalatenduM joined #gluster
17:56 DV joined #gluster
18:02 davemc joined #gluster
18:03 theron joined #gluster
18:04 Pupeno joined #gluster
18:05 sputnik13 joined #gluster
18:10 PeterA joined #gluster
18:13 sputnik13 joined #gluster
18:17 diegows joined #gluster
18:19 hflai joined #gluster
18:21 John_HPC joined #gluster
18:22 John_HPC Anyone seen duplicate directories pop up before?  http://paste.ubuntu.com/9076011/
18:22 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
18:27 sputnik13 joined #gluster
18:32 sputnik13 joined #gluster
18:32 B21956 joined #gluster
18:38 reji joined #gluster
18:38 reji Hi folks! I got "WARNING: Circular directory structure." error
18:40 rwheeler joined #gluster
18:46 JoeJulian John_HPC: Yes, but long ago. Have you tried remounting?
18:46 JoeJulian reji: I've never seen that before...
18:48 reji <@JoeJulian> i see that every time i try to do anything with glusterfs..
18:48 SFLimey joined #gluster
18:48 SFLimey left #gluster
18:49 JoeJulian reji: Very strange since that phrase isn't in the source.
18:49 semiosis rsync?
18:50 reji thats rm command.
18:50 semiosis ah
18:50 reji rm: WARNING: Circular directory structure. \n This almost certainly means that you have a corrupted file system. \n NOTIFY YOUR SYSTEM MANAGER.
18:51 hflai joined #gluster
18:51 reji <@JoeJulian> ^
18:52 reji i have coreos on bare metal. Inside of it, in a chroot, debian wheezy. /dev/sda10 as a btrfs volume for glusterfs-server and /dev/sda11 as a btrfs volume for glusterfs-client.
18:53 reji How to reproduce: mount all and run 'tar xvf linux-kernel.tar.xz'
18:54 John_HPC JoeJulian: yes I have. stopped and restarted both the volume and the gluster daemons as well
18:54 John_HPC have not rebooted yet
18:55 John_HPC Also running glusterfs-3.6.1-1.el5
18:57 JoeJulian John_HPC: Have you checked the client log for errors?
18:57 rotbeard joined #gluster
18:58 John_HPC [2014-11-18 18:58:21.937095] I [dht-common.c:1822:dht_lookup_cbk] 0-glustervol01-dht: Entry /tls missing on subvol glustervol01-replicate-8
18:59 John_HPC [2014-11-18 18:58:21.944583] I [dht-common.c:1822:dht_lookup_cbk] 0-glustervol01-dht: Entry /libdl.so.2 missing on subvol glustervol01-replicate-1
18:59 John_HPC filled with entries like that
19:04 neofob joined #gluster
19:04 JoeJulian reji: Is your ubuntu install 32 bit?
19:04 JoeJulian er, debian
19:05 reji <@JoeJulian> nope. Host - CoreOS (always 64bit). Inside of it in chroot(simple chroot) debian wheezy.
19:06 JoeJulian John_HPC: Did something change before this started?
19:06 John_HPC Not that I am aware of
19:06 John_HPC seemed fine on Friday. left it alone over the weekend.
19:09 reji Latest debian package (deb http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/wheezy/apt wheezy main)
19:09 glusterbot Title: Index of /pub/gluster/glusterfs/LATEST/Debian/wheezy/apt (at download.gluster.org)
19:09 JoeJulian reji: So, what I see is that the error means that you have a directory, foo, which has a subdirectory. That subdirectory points to the same inode as foo. Typically that might look like foo/foo/foo/foo...etc. If you have something like that in your client mount, check the brick to see if it's there too.
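
JoeJulian's description can be confirmed with inode numbers. A minimal sketch, assuming a suspect directory (foo is a placeholder) visible on a brick at /export/brick1:

    # identical inode numbers on both entries confirm the loop
    ls -di /export/brick1/foo /export/brick1/foo/foo

    # list every path on the brick that shares that inode
    find /export/brick1 -xdev -inum "$(stat -c %i /export/brick1/foo)"
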
19:11 JoeJulian John_HPC: Check $brickroot/.glusterfs/00/00/00*00 . It should be a symlink on all your bricks. If it's a directory, rmdir it.
19:12 JoeJulian That bug is *supposed* to be fixed, but just in case it's not....
19:12 John_HPC ok
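
That check loops easily over several bricks. A minimal sketch, assuming bricks under /export (paths are hypothetical):

    # <brick>/.glusterfs/00/00/00*00 is the root gfid entry and must be
    # a symlink; per JoeJulian, rmdir it if it has turned into a directory
    for b in /export/brick*; do
        for g in "$b"/.glusterfs/00/00/00*00; do
            if [ -d "$g" ] && [ ! -L "$g" ]; then
                echo "directory instead of symlink: $g"
                rmdir "$g"
            fi
        done
    done
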
19:12 reji <@JoeJulian> according to "ls -laR" i didn't have foo/foo/foo. I'll remake with ext4 instead of btrfs and check. 5 min please
19:13 JoeJulian You do realize, of course, the foo is an example filename...
19:13 JoeJulian we recommend xfs
19:14 JoeJulian iirc, there's a bug with ext4 and 3.6.1
19:15 reji https://pastee.org/79hm4      https://pastee.org/d9vs8
19:15 glusterbot Title: Paste: 79hm4 (at pastee.org)
19:15 msmith_ joined #gluster
19:16 nshaikh joined #gluster
19:16 reji original /var/lib/docker  https://pastee.org/4xjjf
19:16 glusterbot Title: Paste: 4xjjf (at pastee.org)
19:18 JoeJulian bug 1163161
19:18 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1163161 high, high, 3.6.2, skoduri, POST , With afrv2 + ext4, lookups on directories with large offsets could result in duplicate/missing entries
19:18 reji <@JoeJulian> this better, glsuterfs vs. original https://pastee.org/62yn7
19:18 glusterbot Title: Paste: 62yn7 (at pastee.org)
19:18 JoeJulian John_HPC: Could you be hitting that bug?
19:22 John_HPC JoeJulian: could be.
19:23 * JoeJulian sighs...
19:23 sputnik13 joined #gluster
19:23 JoeJulian If I'm reading this right, that's the exact same bug that was fixed back in June.
19:24 JoeJulian I wonder how that happened...
19:31 ira joined #gluster
19:32 reji <@JoeJulian> ext4 and btrfs - nothing changed. I can test only with ext{2,3,4} and btrfs. Anyway, i'm going home and will be back only after ~16 hours
19:37 ttk joined #gluster
19:37 John_HPC JoeJulian: unfortunately file systems are one of my many weak points. Any way to test/fix this?
19:38 lmickh joined #gluster
19:46 Pupeno joined #gluster
19:46 Pupeno joined #gluster
19:50 B21956 joined #gluster
19:58 sputnik13 joined #gluster
20:01 n-st joined #gluster
20:10 theron joined #gluster
20:13 smohan joined #gluster
20:20 theron joined #gluster
20:24 DougBishop joined #gluster
20:30 deniszh joined #gluster
20:36 smohan_ joined #gluster
20:49 elico joined #gluster
20:55 cliluw joined #gluster
20:58 smohan joined #gluster
20:59 theron joined #gluster
21:01 _dist joined #gluster
21:01 gildub joined #gluster
21:01 dataio joined #gluster
21:01 mbukatov joined #gluster
21:04 warci joined #gluster
21:04 lava joined #gluster
21:04 MugginsM joined #gluster
21:05 warci hi all, i've got a brick from a server. Is it possible to start a volume using that brick on another server? (without rebuilding the whole .glusterfs dir?)
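
warci's question goes unanswered below; for completeness, one commonly cited approach (an assumption here, not confirmed in this log) is to clear the old volume's identity xattrs on the brick root so a new volume can adopt the existing data and .glusterfs tree:

    # hypothetical paths; run on the server that now holds the brick
    setfattr -x trusted.glusterfs.volume-id /export/brick1
    setfattr -x trusted.gfid /export/brick1

    # "force" is needed because the brick directory is not empty
    gluster volume create newvol newserver:/export/brick1 force
    gluster volume start newvol
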
21:25 sputnik13 joined #gluster
21:26 smohan joined #gluster
21:30 neofob left #gluster
21:39 chirino joined #gluster
21:51 _Bryan_ joined #gluster
21:52 theron joined #gluster
21:53 badone joined #gluster
21:57 B21956 joined #gluster
22:19 elico joined #gluster
22:20 badone joined #gluster
22:34 sputnik13 joined #gluster
22:47 sputnik13 joined #gluster
22:49 ProT-0-TypE joined #gluster
22:55 MrAbaddon joined #gluster
23:21 mbukatov joined #gluster
23:25 calisto joined #gluster
23:46 chirino joined #gluster
