
IRC log for #gluster, 2013-12-31


All times shown according to UTC.

Time Nick Message
00:06 ofu_ joined #gluster
00:35 jporterfield joined #gluster
00:40 jporterfield joined #gluster
00:46 jporterfield joined #gluster
01:02 jporterfield joined #gluster
01:07 jporterfield joined #gluster
01:14 yinyin joined #gluster
01:35 jporterfield joined #gluster
01:45 davidbierce joined #gluster
02:11 harish__ joined #gluster
02:12 jporterfield joined #gluster
02:20 davidbierce joined #gluster
02:21 mirjam joined #gluster
02:33 shyam joined #gluster
02:52 qdk joined #gluster
02:55 jporterfield joined #gluster
03:27 jporterfield joined #gluster
03:33 RameshN joined #gluster
03:33 shubhendu joined #gluster
03:45 sticky_afk joined #gluster
03:45 stickyboy joined #gluster
03:48 hflai joined #gluster
03:54 jporterfield joined #gluster
03:59 raghu joined #gluster
04:09 glusterbot New news from newglusterbugs: [Bug 1047416] Feature request (CLI): Add options to the CLI that let the user control the reset of stats <https://bugzilla.redhat.com/show_bug.cgi?id=1047416>
04:10 aravindavk joined #gluster
04:22 ngoswami joined #gluster
04:22 ndarshan joined #gluster
04:25 yinyin joined #gluster
04:31 kanagaraj joined #gluster
04:40 bala joined #gluster
04:41 shyam joined #gluster
04:56 MiteshShah joined #gluster
05:09 stickyboy joined #gluster
05:12 ppai joined #gluster
05:20 spandit joined #gluster
05:26 vpshastry joined #gluster
05:28 psharma joined #gluster
05:28 vpshastry left #gluster
05:32 davinder joined #gluster
05:37 dusmant joined #gluster
05:41 lalatenduM joined #gluster
05:45 bala joined #gluster
05:47 jporterfield joined #gluster
05:51 satheesh joined #gluster
05:57 jporterfield joined #gluster
05:59 davidbie_ joined #gluster
06:02 jporterfield joined #gluster
06:04 shyam joined #gluster
06:10 jporterfield joined #gluster
06:14 aravindavk joined #gluster
06:14 psharma joined #gluster
06:15 jporterfield joined #gluster
06:21 jporterfield joined #gluster
06:22 CheRi joined #gluster
06:27 vimal joined #gluster
06:28 jporterfield joined #gluster
06:35 spandit joined #gluster
06:38 marcoceppi joined #gluster
06:38 marcoceppi joined #gluster
06:38 jporterfield joined #gluster
06:40 shyam joined #gluster
06:51 harish__ joined #gluster
06:51 xymox joined #gluster
06:51 F^nor joined #gluster
06:56 jporterfield joined #gluster
07:01 F^nor joined #gluster
07:09 xymox joined #gluster
07:14 shyam joined #gluster
07:14 vimal joined #gluster
07:41 ekuric joined #gluster
07:46 xymox joined #gluster
07:57 bala joined #gluster
08:00 davinder2 joined #gluster
08:01 ngoswami joined #gluster
08:02 ctria joined #gluster
08:11 itisravi joined #gluster
08:14 ngoswami joined #gluster
08:20 ndarshan joined #gluster
08:24 satheesh joined #gluster
08:32 bala joined #gluster
08:35 satheesh joined #gluster
08:36 _br_ joined #gluster
08:39 hflai joined #gluster
08:53 bala joined #gluster
09:01 ctria joined #gluster
09:06 vpshastry joined #gluster
09:10 glusterbot New news from newglusterbugs: [Bug 1040355] NT ACL : User is able to change the ownership of folder <https://bugzilla.redhat.com/show_bug.cgi?id=1040355>
09:14 jporterfield joined #gluster
09:19 jporterfield joined #gluster
09:24 shyam joined #gluster
09:31 xymox joined #gluster
09:36 atrius joined #gluster
09:39 bala joined #gluster
09:43 samppah_ semiosis: ping? you're using nagios check_log with -O /dev/null.. how do you handle the logs after errors? just logrotate?
09:49 bolazzles joined #gluster
10:04 xymox joined #gluster
10:09 mohankumar__ joined #gluster
10:19 _br_ joined #gluster
10:22 dusmant joined #gluster
10:23 tryggvil joined #gluster
10:26 xymox joined #gluster
10:36 xymox joined #gluster
10:41 psharma joined #gluster
10:46 NuxRo joined #gluster
10:49 xymox joined #gluster
10:50 shubhendu joined #gluster
10:51 ndarshan joined #gluster
11:06 xymox joined #gluster
11:16 pk1 joined #gluster
11:21 RedShift joined #gluster
11:24 jporterfield joined #gluster
11:29 psharma joined #gluster
11:33 calum_ joined #gluster
11:42 pk1 left #gluster
11:50 aravindavk joined #gluster
11:57 xymox joined #gluster
12:00 jporterfield joined #gluster
12:08 xymox joined #gluster
12:16 psyl0n joined #gluster
12:20 jporterfield joined #gluster
12:21 trickyhero joined #gluster
12:22 trickyhero left #gluster
12:23 aravindavk joined #gluster
12:28 foster joined #gluster
12:37 xymox joined #gluster
12:41 F^nor joined #gluster
12:52 psyl0n joined #gluster
12:59 xymox joined #gluster
13:01 jporterfield joined #gluster
13:07 jporterfield joined #gluster
13:14 mkzero joined #gluster
13:15 mkzero joined #gluster
13:20 jporterfield joined #gluster
13:26 jporterfield joined #gluster
13:38 jporterfield joined #gluster
13:46 bolazzles joined #gluster
13:50 xymox joined #gluster
14:13 psyl0n joined #gluster
14:17 xymox joined #gluster
14:29 xymox joined #gluster
14:41 cfeller joined #gluster
14:51 xymox joined #gluster
14:54 InnerFIRE joined #gluster
14:54 InnerFIRE hello
14:54 glusterbot InnerFIRE: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:56 samppah howdy
14:56 InnerFIRE I am having a strange issue where the gluster debian wheezy mirror is refusing to update because of conflicts with the libc version
14:56 shyam joined #gluster
14:56 InnerFIRE is there a better mirror than download.gluster.org?
14:57 CROS_ joined #gluster
15:00 samppah InnerFIRE: what version are you using/updating?
15:01 InnerFIRE currently 3.3.1
15:06 InnerFIRE trying to upgrade to LATEST
15:08 samppah @ppa
15:08 glusterbot samppah: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k
15:08 samppah i'm not sure about the status of the debian packages but the ppa packages should work with debian too
15:08 samppah semiosis probably knows better :)
15:08 InnerFIRE I have a strange feeling that the debian wheezy packages were built for ubuntu and never tested
15:09 InnerFIRE on debian
15:10 InnerFIRE I can tell you from experience that Ubuntu repos tend not to like debian stable
15:10 cfeller I've been using the Wheezy 3.4.1 client packages on debian, w/out issue.  and I'm using download.gluster.org
15:11 InnerFIRE hmmm
15:11 cfeller what error are you seeing?
15:12 InnerFIRE The following packages have been kept back: glusterfs-client glusterfs-common glusterfs-server
15:12 InnerFIRE and
15:12 InnerFIRE glusterfs-common : Depends: libc6 (>= 2.14) but 2.13-38 is to be installed
15:12 InnerFIRE Depends: liblvm2app2.2 (>= 2.02.98) but 2.02.95-8 is to be installed
15:13 InnerFIRE Depends: librdmacm1 (>= 1.0.16) but 1.0.15-1+deb7u1 is to be installed
15:13 InnerFIRE Breaks: glusterfs-server (< 3.4.0~qa5) but 3.3.1-1 is to be installed
15:13 cfeller and if you run apt-get dist-upgrade ?
15:13 InnerFIRE still keeps those 3 packages back
15:15 psyl0n joined #gluster
15:15 mkzero_ joined #gluster
15:16 cfeller what is in your /etc/apt/sources.list?  (it is looking like it is trying to pull in 3.4.0~qa5)
15:17 Onoz joined #gluster
15:18 F^nor joined #gluster
15:18 InnerFIRE debian updates for wheezy
15:18 InnerFIRE and deb http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/apt wheezy
15:18 glusterbot Title: Index of /pub/gluster/glusterfs/LATEST/Debian/apt (at download.gluster.org)
15:23 cfeller hmm... OK.  I just grabbed one of my Debian client boxes that I hadn't upgraded from 3.3.2, and switched the repo to match yours.... I see:
15:23 cfeller http://ur1.ca/gag02
15:23 cfeller So no problems there...
15:23 glusterbot Title: #65126 Fedora Project Pastebin (at ur1.ca)
15:25 cfeller So I don't think the gluster packages are the problem...
15:26 cfeller and libc6 2.13-38 is the correct version for wheezy, so you shouldn't need anything greater.
15:26 cfeller can you paste your entire sources.list?  I'm thinking there is something else in that file that may be getting pulled.  Or, you may have files in /etc/apt/sources.list.d that are being grabbed.
15:26 InnerFIRE ah
15:27 InnerFIRE you just gave me an idea
15:27 InnerFIRE and now it's fixed
15:27 cfeller excellent....
15:27 InnerFIRE heh
15:27 cfeller culprit?
15:27 InnerFIRE sometime in the past something broke the apt repo on download.gluster
15:28 InnerFIRE and I have been stuck there ever since
15:28 InnerFIRE I removed the packages and reinstalled
15:28 InnerFIRE and it works
15:28 cfeller awesome.
15:29 InnerFIRE definitely, thanks
15:30 cfeller np
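For anyone hitting the same held-back packages, the checks suggested above boil down to roughly the following (a sketch; the package names are the ones from the paste, and repo entries may live in either location):

    cat /etc/apt/sources.list /etc/apt/sources.list.d/*.list              # look for stale or duplicate gluster repo entries
    apt-cache policy glusterfs-server glusterfs-common glusterfs-client   # see which repo and version apt prefers
    apt-get update && apt-get dist-upgrade                                # dist-upgrade may pull in changed dependencies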
15:31 bolazzles joined #gluster
15:50 genatik joined #gluster
15:51 jobewan joined #gluster
16:07 jag3773 joined #gluster
16:09 daMaestro joined #gluster
16:23 marcoceppi joined #gluster
16:23 JordanHackworth joined #gluster
16:23 randallman joined #gluster
16:23 marcoceppi joined #gluster
16:23 flrichar joined #gluster
16:24 psyl0n joined #gluster
16:24 marcoceppi joined #gluster
16:26 psyl0n joined #gluster
16:37 CROS___ joined #gluster
16:43 blaaa joined #gluster
16:44 blaaa having a very strange issue that is causing a catastrophe and we can't find what's wrong, maybe someone can help
16:45 blaaa after a VOL restart, 2 nodes out of 6 have had a big load for days with not much work going on for them
16:46 blaaa cpu sys time of all glusterfsd's (around 10 for each node) is at 60% all the time and they don't go down no matter what you do
16:46 blaaa when files are served it goes to load 100
16:47 blaaa when it's doing nothing it's around load 3 and all glusterfsd's are at 60% system time
16:48 samppah anything in log files?
16:48 blaaa nothing at all that can explain that
16:48 blaaa checked all bricks, client logs etc..
16:51 blaaa i'm suspecting it's the .gluster folder/indices that cause the issues, is it possible to rebuild the indices based on existing files with some bash
16:51 blaaa ?
16:52 blaaa looks like a very nasty bug
16:52 schrodinger_ joined #gluster
16:52 marcoceppi joined #gluster
16:52 marcoceppi joined #gluster
16:54 samppah blaaa: what version are you using?
16:54 samppah using ext4 possibly?
16:56 randallman joined #gluster
16:56 marcoceppi joined #gluster
16:56 flrichar joined #gluster
16:56 randallman joined #gluster
16:56 blaaa yes
16:56 marcoceppi joined #gluster
16:56 glusterbot` joined #gluster
16:56 samppah @ext4
16:56 blaaa i suspected that and upgraded to 3.3.2 a few hours ago
16:56 blaaa didn't help
16:56 samppah what was the version before that?
16:56 glusterbot samppah: The ext4 bug has been fixed in 3.3.2 and 3.4.0. Read about the ext4 problem at http://goo.gl/Jytba or follow the bug report here http://goo.gl/CO1VZ
16:56 blaaa 3.3.1
16:56 samppah oh, ok
16:57 blaaa it has been on 3.3.2 for a few hours now but it's the same
16:58 samppah yeah, i was wondering if something was messed up with the data but you should definitely see some error messages in the log files
16:58 samppah ie. split brain situation or something like that
16:58 samppah @afr
16:58 glusterbot samppah: For some do's and don'ts about replication, see http://joejulian.name/blog/glusterfs-replication-dos-and-donts/
16:58 samppah @dht
16:58 glusterbot samppah: I do not know about 'dht', but I do know about these similar topics: 'dd'
16:58 blaaa nothing shows regarding that
16:59 blaaa gluster volume heal GRID1 info split-brain shows 0
16:59 samppah does info show anything, or info heal-failed?
17:00 blaaa neither, nothing special at all
17:01 blaaa Number of entries: 1
17:01 blaaa <gfid:242508a3-b225-45fb-9be6-67d521c12322>
17:02 blaaa on one brick that is not on the problematic replica
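For reference, the heal-status commands being discussed look roughly like this on 3.3.x (GRID1 is the volume name from the paste above; output formats vary between versions):

    gluster volume heal GRID1 info              # entries currently queued for self-heal
    gluster volume heal GRID1 info split-brain  # entries detected as split-brain
    gluster volume heal GRID1 info heal-failed  # entries the self-heal daemon could not heal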
17:05 vpshastry joined #gluster
17:06 blaaa is the ext4 bug fixed in version 3.3.2?
17:06 vpshastry left #gluster
17:07 samppah afaik it should be
17:11 Alex whelp, it's still broken in 3.3.1-1, but I know that's not your question ;-)
17:11 Alex (sorry!)
17:14 glusterbot New news from newglusterbugs: [Bug 1023191] glusterfs consuming a large amount of system memory <https://bugzilla.redhat.com/show_bug.cgi?id=1023191>
17:21 xymox joined #gluster
17:37 michaelholley joined #gluster
17:37 michaelholley I have a quick question if someone can help.
17:38 samppah depends on question :)
17:38 michaelholley I've done some searching and haven't been able to find anything definitive. What protocol does the built in NFS server in Gluster use? 3, 4 or both?
17:38 samppah 3
17:38 samppah tcp
17:38 blaaa again load 80 on 2 nodes with no clients
17:39 michaelholley Okay, in our environment we don't use v4 so that's great, thanks samppah.
17:39 samppah michaelholley: np :)
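Since the built-in NFS server speaks NFSv3 over TCP, a client mount would look roughly like this (a sketch; server1, VOLNAME and the mount point are placeholders):

    mount -t nfs -o vers=3,proto=tcp,mountproto=tcp server1:/VOLNAME /mnt/gluster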
17:44 Mo__ joined #gluster
17:48 semiosis InnerFIRE: give me details of the version mismatch please
17:49 samppah semiosis: i think he figured that out
17:49 samppah gluster packages are fine if i understood correctly
17:49 semiosis ah i see now
17:49 semiosis great
17:50 samppah semiosis: did you notice my question about nagios check_log earlier?
17:50 SFLimey_ joined #gluster
17:50 semiosis did not
17:50 * semiosis scrolls back
17:51 semiosis yes, logrotate -f /etc/logrotate.conf, to be exact
17:51 semiosis cheap but effective
17:51 samppah ah, okay
17:52 semiosis i really never get any errors in the gluster logs except when i do maintenance
17:52 semiosis it's shockingly stable
17:52 pureflex joined #gluster
17:56 samppah semiosis: yeah, i have been checking log files manually :P there isn't much noise if everything is fine
17:56 samppah but now i really need to set up automatic log monitoring :)
17:57 semiosis logstash
17:57 blaaa nobody has had this issue? all glusterfsd processes are at 80% proc time for days.. the storage can't function like that anymore
17:57 samppah currently using it for other stuff but not yet configured for gluster logs
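The check_log plus logrotate approach described above comes down to something like this (a sketch; the plugin path, brick log path and the " E " error pattern are assumptions, only -O /dev/null and the forced logrotate come from the discussion):

    /usr/lib/nagios/plugins/check_log -F /var/log/glusterfs/bricks/brick1.log -O /dev/null -q " E "
    logrotate -f /etc/logrotate.conf    # force-rotate after handling an alert so the same lines don't re-trigger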
18:13 blaaa that's great, stuck with gluster going crazy on me and no indication whatsoever of what is wrong.
18:14 samppah blaaa: have you tried checking those processes with strace?
18:19 blaaa yes, it looks the same as tracing good bricks
18:19 blaaa good bricks are bricks that run at 5% proc time when they don't have much work
18:19 samppah okay
18:20 blaaa the bad bricks have been running at 30-80% for days
18:20 samppah are these all on the same machine?
18:20 samppah and have all processes been restarted after upgrading to 3.3.2?
18:21 semiosis is the cpu running or in iowait?
18:21 semiosis vmstat?
18:21 semiosis i mean, iostat
18:21 blaaa all the bad bricks are in the same replica set.. sure, i restarted glusterd + volume stop/start just to be sure
18:21 samppah ok
18:21 blaaa no IO at all, that's what's strange
18:22 blaaa no iowait, vmstat shows no swapping, no IO
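The checks being referred to are roughly these (a sketch; interval and count are arbitrary):

    iostat -x 2 5    # per-device %util and await; high values point at the disks
    vmstat 2 5       # wa column = iowait, si/so = swapping, r = runnable threads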
18:22 semiosis kill the brick process & restart glusterd on that machine
18:22 semiosis which will respawn the killed proc
18:22 blaaa didn't help either, a few reboots already
18:22 blaaa return
18:22 semiosis this doesnt add up
18:22 semiosis if strace doesnt say the process is doing anything
18:22 semiosis and there's no IO load on the process
18:22 samppah what's the actual cmd line of the process going crazy?
18:23 semiosis then the process *isn't* doing anything
18:23 blaaa no, strace shows lines running by very fast but the same as a good brick
18:23 semiosis then how can there be no io>
18:23 semiosis ?
18:23 blaaa readv(31, [{"\200\0\1\224", 4}], 1)     = 4
18:23 blaaa readv(31, [{"\0\10A\317\0\0\0\0", 8}], 1) = 8
18:23 blaaa readv(31, [{"\0\0\0\2\0\23\320\5\0\0\1J\​0\0\0\33\0\5\363\227\0\0\0$", 24}], 1) = 24
18:23 blaaa readv(31, [{"\0\0](\0\0\0!\0\0\0!\0\0\0\2\0\0​\0!\0\0\7\320\0\0\0\10\0\0\0\0"..., 372}], 1) = 372
18:23 blaaa futex(0x132ce9c, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x132ce98, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
18:23 blaaa futex(0x132ce70, FUTEX_WAKE_PRIVATE, 1) = 1
18:23 blaaa lots of them
18:24 semiosis set the brick log to debug level, maybe that will put something interesting in the logs
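Raising the brick log level as suggested is a volume option (a sketch; VOLNAME is a placeholder, and it is worth setting the level back afterwards):

    gluster volume set VOLNAME diagnostics.brick-log-level DEBUG
    # reproduce the load, read /var/log/glusterfs/bricks/*.log, then:
    gluster volume set VOLNAME diagnostics.brick-log-level INFO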
18:26 blaaa ok i'll try, i want to first upgrade to the latest, maybe the major AFR update will do something because i'm running out of options
18:26 blaaa 3.4
18:27 nonsenso joined #gluster
18:27 blaaa it happened on just 1 brick on another replica set in the morning and was fixed after i stopped gluster on server 1 of that set, deleted the whole folder, then started it
18:27 blaaa after that it replicated back and was fixed
18:28 blaaa but now i have this on 10 bricks (all the bricks on the other replica set)
18:29 blaaa i see a slightly strange thing in vmstat, it shows 3-20 in the r column.. threads in ready state, but 0 IO
18:29 JoeJulian blaaa: readv(31, [{"\200\0\1\224",... if you lsof -p {same pid as above} you can see which file is being accessed with fd 31.
18:30 blaaa fd31 is?
18:30 Staples84 joined #gluster
18:30 JoeJulian readv(31
18:30 blaaa hd?
18:31 JoeJulian fd=file descriptor
18:34 blaaa it shows just a file, nothing special
18:34 blaaa i use ps axf | grep glusterfsd | grep -v grep | awk '{print "ls -l /proc/" $1 "/fd/ "}' | sh
18:35 blaaa to see all files on all glusterfsd's
18:35 blaaa just 20 normal connections (e.g. not system files and sockets), it shouldn't be under high load
18:36 JoeJulian pgrep glusterfsd | xargs -n 1 lsof -p
18:36 JoeJulian But anyway... your file 31 was the one showing in your strace. What makes that file special?
18:37 JoeJulian why would it be so busy as to be causing your cpu spike?
18:39 JoeJulian ... or do you think that fd showing up was just a result of a random sample and is meaningless?
18:39 blaaa it was just the last output but the fds change all the time. it shouldn't be so busy. only 20 connections
18:43 JoeJulian Perhaps it shouldn't be, but it apparently is. The question is, why? Have you checked with iotop to see that it really is glusterfsd that's causing the io?
18:44 blaaa there is no IO at all, no IO no iowait
18:45 blaaa that's what's strange, it's not the disks. it's something logical in gluster and the kernel, as the cpu is doing sys time not user time
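The fd-to-file mapping and strace sampling discussed above look roughly like this (a sketch; the fd number 31 comes from the strace paste, the pid would be the busy brick's):

    pid=$(pgrep glusterfsd | head -1)   # or take the busy brick's pid from top
    ls -l /proc/$pid/fd/31              # resolve fd 31 to an actual path
    strace -c -p $pid                   # -c summarizes which syscalls dominate; stop with ctrl-c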
18:46 JoeJulian Is this also a client, or just a server?
18:47 blaaa just servers of course
18:49 JoeJulian Then if we're not using user time, we should be in fuse or i/o. A self-heal or a rebalance would use fuse.
18:54 blaaa fuse is not mounted on the servers and there is no IO at all, i mean 10K read is really nothing
18:55 blaaa i tried to disable self-heal just to see if it helps and it's not helping
18:55 blaaa waited for half an hour but no, still the same
18:56 blaaa rebalance is not working either
18:57 blaaa all the bricks show 20%-60% system time in top and i can't see why
18:58 blaaa all of them with the same % more or less
18:58 blaaa when it goes down to 19% it's all of them etc..
18:59 blaaa all of them with 0% memory
19:01 blaaa now it's steady at 15% avg for 5 minutes but still it's too much and it will change soon
19:01 blaaa all the bricks same %
19:03 blaaa 30%, what the hell is it
19:04 blaaa i'm sure it's the AFR because it's the same on the 2 replica servers
19:05 JoeJulian Replication happens on the client.
19:05 blaaa i know :)
19:05 blaaa self-heal no
19:06 JoeJulian "pkill -f glustershd" to ensure the self-heal daemon isn't running.
19:07 blaaa then restart gluster?
19:08 JoeJulian Not if you just want to see if that's where the load is coming from.
19:08 JoeJulian When you do want to restart it, then yes. Restart glusterd.
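In shell terms, the test proposed here is roughly:

    pkill -f glustershd         # stop the self-heal daemon and watch whether the load drops
    service glusterd restart    # when done testing; glusterd respawns glustershd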
19:09 blaaa i see it with top, it's 10 processes of glusterfsd each with 20-60% proc time
19:09 blaaa on both servers
19:09 blaaa in that replica set, the rest of the sets are fine
19:10 JoeJulian 10? Do you have 10 volumes on those servers?
19:10 JoeJulian No... wait...
19:11 JoeJulian Wouldn't matter. There's only one glustershd per server....
19:11 blaaa 10 bricks
19:11 JoeJulian Oh, nevermind. I misread that since you changed subjects.
19:11 blaaa glusterfsd
19:12 blaaa glusterfsd for each brick
19:12 JoeJulian yep
19:12 JoeJulian You're not in swap are you?
19:13 blaaa the glusterd doesn't do much in terms of % cpu now
19:13 blaaa :) no
19:13 blaaa lots of free memory
19:15 blaaa this is really bad now, i can just upgrade to version 3.4 and hope
19:16 JoeJulian Sounds like fun.
19:18 blaaa it's not, it would mean many lost clients
19:20 JoeJulian Plus, you don't know where your problem lies, so you'll never know if you're just wasting your time and causing your users frustration.
19:20 blaaa yes
19:20 blaaa gluster decided to go mad and there's nothing to be done
19:20 JoeJulian So, when your cpu utilization is over 60%, what problems are your clients seeing?
19:21 NuxRo joined #gluster
19:22 blaaa the problem gets nasty when the connections rise, on this replica set it goes up to 150 (load) while on other replica sets that get the same amount of connections it's load 5-8 at most
19:23 blaaa now the load is 1-2 so things function, but when the load is up on these 2 nodes it causes the entire volume to halt and be very unresponsive
19:24 blaaa it makes sense that this issue is related to the high load when connections rise, so if i fix the cpu sys time issue it would probably be normal under load as well, since it's ok on the rest of the sets
19:26 JoeJulian An strace shows you what the process is doing. The snippet you pasted into the channel shows mostly iops. If you looked at a larger sample (which you would not paste in channel, of course), you should be able to pretty quickly see a pattern if it's stuck doing something it shouldn't. If it's iops, then it's iops. Perhaps use wireshark to see what's doing all the io.
19:28 JoeJulian If it IS all i/o, perhaps try the deadline or noop schedulers and see if that makes any difference to your load.
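Switching schedulers as suggested is done per block device and does not persist across reboots (sdX is a placeholder for the brick's disk):

    cat /sys/block/sdX/queue/scheduler             # the current scheduler is shown in brackets
    echo deadline > /sys/block/sdX/queue/scheduler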
19:29 blaaa iostat shows 10k read 0 write, vmstat also shows not much IO
19:29 blaaa and why is it the same on all bricks
19:30 blaaa same % of cpu time
19:30 JoeJulian because all the bricks are performing the same operation.
19:30 blaaa they have different files
19:30 JoeJulian Then it's probably directory related.
19:30 blaaa it's not serving files for sure
19:31 InnerFIRE now to fix the /l
19:31 blaaa i suspect it's the symlink indices
19:31 InnerFIRE sorry wrong window
19:31 InnerFIRE left #gluster
19:31 blaaa but im not sure what
19:31 JoeJulian Could something be making a lot of directories?
19:31 blaaa i can only think of the .gluster symlinks folder
19:32 blaaa the index
19:32 JoeJulian Seems unlikely. Only directories are created equally across distribute subvolumes.
19:33 JMWbot joined #gluster
19:33 JMWbot I am JMWbot, I try to help remind johnmark about his todo list.
19:33 JMWbot Use: JMWbot: @remind <msg> and I will remind johnmark when I see him.
19:33 JMWbot /msg JMWbot @remind <msg> and I will remind johnmark _privately_ when I see him.
19:33 JMWbot The @list command will list all queued reminders for johnmark.
19:33 JMWbot The @about command will tell you about JMWbot.
19:33 blaaa so it had to be the same on the rest of the replication pairs
19:34 JoeJulian I don't understand that statement in context.
19:36 blaaa i mean if it's VOLUME wide (directory creation) then it would have to be on the rest of the servers (replica sets)
19:36 andreask joined #gluster
19:36 JoeJulian Which is what you said was happening.
19:36 blaaa its just on 2 servers
19:37 JoeJulian I'm going to go feed my daughter breakfast. bbl
19:37 blaaa in a replica pair; the rest are showing healthy status, e.g. glusterfsd processes are mostly at 0%
19:38 blaaa :) nice
19:59 psyl0n joined #gluster
19:59 jag3773 joined #gluster
20:44 mkzero joined #gluster
21:12 tg2 joined #gluster
21:31 jobewan joined #gluster
21:40 jporterfield joined #gluster
22:28 psyl0n joined #gluster
22:42 mkzero joined #gluster
23:00 psyl0n joined #gluster
23:56 jag3773 joined #gluster
