IRC log for #gluster, 2016-04-20


All times shown according to UTC.

Time Nick Message
00:28 MugginsM joined #gluster
00:52 russoisraeli joined #gluster
00:54 Wizek joined #gluster
01:08 akay joined #gluster
01:08 harish joined #gluster
01:09 akay hi guys, anyone had any problems with ubuntu rolling out samba 4.3.8 and samba-vfs-modules not working with it?
01:24 mowntan joined #gluster
01:26 EinstCrazy joined #gluster
01:26 Wizek joined #gluster
02:00 EinstCra_ joined #gluster
02:14 Lee1092 joined #gluster
03:13 mowntan joined #gluster
03:17 russoisraeli joined #gluster
03:23 overclk joined #gluster
03:38 luizcpg joined #gluster
03:41 hchiramm joined #gluster
03:47 RameshN joined #gluster
03:52 ashiq joined #gluster
03:56 PaulCuzner joined #gluster
04:03 dlambrig_ joined #gluster
04:03 ashiq joined #gluster
04:04 vmallika joined #gluster
04:05 shubhendu joined #gluster
04:06 hackman joined #gluster
04:12 d-fence joined #gluster
04:13 the-me joined #gluster
04:15 nbalacha joined #gluster
04:24 atalur joined #gluster
04:27 tru_tru joined #gluster
04:35 valkyr1e joined #gluster
04:43 fcoelho joined #gluster
04:45 kshlm joined #gluster
04:50 nishanth joined #gluster
04:52 ashiq joined #gluster
05:01 nehar joined #gluster
05:08 ndarshan joined #gluster
05:11 gem joined #gluster
05:17 hgowtham joined #gluster
05:20 nehar joined #gluster
05:20 Manikandan_ joined #gluster
05:21 karthik___ joined #gluster
05:24 PotatoGim_ joined #gluster
05:24 Apeksha joined #gluster
05:25 ashiq joined #gluster
05:26 PotatoGim_ joined #gluster
05:28 PotatoGim joined #gluster
05:31 ppai joined #gluster
05:32 karnan joined #gluster
05:33 aravindavk joined #gluster
05:35 overclk joined #gluster
05:35 rafi joined #gluster
05:36 aspandey joined #gluster
05:41 Bhaskarakiran joined #gluster
05:50 skoduri joined #gluster
05:51 Bhaskarakiran joined #gluster
05:55 skoduri joined #gluster
05:56 poornimag joined #gluster
05:57 hchiramm joined #gluster
05:58 Gnomethrower joined #gluster
06:07 nehar joined #gluster
06:09 mhulsman joined #gluster
06:11 ramky joined #gluster
06:11 DV_ joined #gluster
06:17 hackman joined #gluster
06:25 spalai joined #gluster
06:27 kdhananjay joined #gluster
06:28 rouven joined #gluster
06:34 jtux joined #gluster
06:35 anil_ joined #gluster
06:37 overclk_ joined #gluster
06:40 aravindavk joined #gluster
06:40 nbalacha joined #gluster
06:41 rastar joined #gluster
06:41 nbalacha joined #gluster
06:44 gem joined #gluster
06:50 spalai joined #gluster
06:50 kovshenin joined #gluster
06:50 deniszh joined #gluster
07:03 hchiramm joined #gluster
07:09 jri joined #gluster
07:16 jri joined #gluster
07:17 [Enrico] joined #gluster
07:18 gem joined #gluster
07:21 kovshenin joined #gluster
07:25 unforgiven512 joined #gluster
07:26 mmckeen joined #gluster
07:28 fsimonce joined #gluster
07:31 ctria joined #gluster
07:37 Slashman joined #gluster
07:39 DV_ joined #gluster
07:40 goretoxo joined #gluster
07:40 morse_ joined #gluster
07:47 karthik___ joined #gluster
07:48 wnlx joined #gluster
07:48 arcolife joined #gluster
07:59 [Enrico] joined #gluster
08:03 anil_ joined #gluster
08:10 [diablo] joined #gluster
08:14 hackman joined #gluster
08:18 atalur joined #gluster
08:18 jwd joined #gluster
08:21 [Enrico] joined #gluster
08:25 auzty joined #gluster
08:30 pur_ joined #gluster
08:42 Bhaskarakiran joined #gluster
08:49 sakshi joined #gluster
08:56 nbalacha joined #gluster
09:00 spalai joined #gluster
09:08 harish joined #gluster
09:13 Raide joined #gluster
09:15 Raide Hi! Does anyone have experience with Gluster 3.5.x on CentOS 6.x and 4Kn drives? I'm using Avago MegaRAID SAS 9361-8i and XFS for the bricks.
09:20 Wizek joined #gluster
09:23 karthik___ joined #gluster
09:26 [Enrico] joined #gluster
09:40 post-factum Raide: i guess you would be interested in asking a more specific question
09:41 post-factum Raide: in general, it is highly unlikely for anyone else to have the same setup as yours
09:43 Debloper joined #gluster
09:46 Raide post-factum: OK, so what about using 4Kn drives with Gluster in general? Any known problems? Best practices?
09:50 Wizek joined #gluster
09:50 post-factum Raide: it should just work
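For 4Kn drives, the main thing worth confirming before building the bricks is that the sector geometry actually reaches XFS. A minimal sketch, assuming a hypothetical device `/dev/sda` and brick mount `/bricks/brick1` (adjust for your MegaRAID virtual disks):

```shell
# Sketch: confirm the controller exposes true 4Kn geometry before making bricks.
blockdev --getss --getpss /dev/sda      # logical/physical sector size; 4096 and 4096 for 4Kn
mkfs.xfs -s size=4096 -i size=512 /dev/sda1   # force 4K sectors; 512B inodes match gluster guidance
xfs_info /bricks/brick1 | grep sectsz          # verify sectsz=4096 on the mounted brick
```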
09:52 nbalacha joined #gluster
10:09 nbalacha joined #gluster
10:12 shubhendu joined #gluster
10:19 TvL2386 joined #gluster
10:25 akay Just in case anyone else has problems with the Ubuntu Trusty update of Samba to 4.3.8 - monotek has updated samba-vfs-modules - this should get you back up and running
10:26 nbalacha joined #gluster
10:35 nbalacha joined #gluster
10:35 pur_ joined #gluster
10:49 baoboa joined #gluster
11:11 gem joined #gluster
11:13 karthikus joined #gluster
11:13 johnmilton joined #gluster
11:21 Apeksha joined #gluster
11:25 nottc joined #gluster
11:30 rwheeler joined #gluster
11:35 rastar gluster weekly meeting will start in about 25 mins, please join #gluster-meeting to participate
11:37 bennyturns joined #gluster
11:45 cholcombe joined #gluster
11:49 ppai joined #gluster
11:50 luizcpg joined #gluster
11:50 cyberbootje joined #gluster
11:51 Saravanakmr joined #gluster
12:02 Manikandan_ joined #gluster
12:17 Manikandan joined #gluster
12:24 ppai joined #gluster
12:31 tg2 joined #gluster
12:34 amye joined #gluster
12:38 jiffin joined #gluster
12:39 Slashman joined #gluster
12:49 Manikandan joined #gluster
12:53 R0ok_ joined #gluster
12:54 squizzi_ joined #gluster
12:57 unclemarc joined #gluster
13:02 spalai left #gluster
13:07 amye joined #gluster
13:09 russoisraeli joined #gluster
13:15 Wizek_ joined #gluster
13:16 [1]akay joined #gluster
13:17 ninkotech_ joined #gluster
13:19 marlinc joined #gluster
13:19 tswartz joined #gluster
13:22 rouven joined #gluster
13:33 jobewan joined #gluster
13:33 dlambrig_ joined #gluster
13:37 Hesulan joined #gluster
13:46 mowntan joined #gluster
13:50 skylar joined #gluster
13:52 mowntan joined #gluster
14:00 mpietersen joined #gluster
14:02 mpietersen joined #gluster
14:03 plarsen joined #gluster
14:03 plarsen joined #gluster
14:04 R4yTr4cer joined #gluster
14:05 jwd joined #gluster
14:07 nbalacha joined #gluster
14:09 jlp1 joined #gluster
14:14 shubhendu joined #gluster
14:19 hchiramm joined #gluster
14:22 tru_tru joined #gluster
14:26 luizcpg joined #gluster
14:36 cristian_ joined #gluster
14:43 squizzi_ joined #gluster
14:45 kpease joined #gluster
14:45 farhorizon joined #gluster
14:47 hgichon joined #gluster
14:48 wushudoin joined #gluster
14:49 wushudoin joined #gluster
14:57 archit_ joined #gluster
15:03 [Enrico] joined #gluster
15:06 ppai_ joined #gluster
15:08 nehar joined #gluster
15:26 kpease joined #gluster
15:32 jugaad joined #gluster
15:35 spalai joined #gluster
15:39 armyriad joined #gluster
15:39 chirino_m joined #gluster
15:49 spalai left #gluster
16:02 bennyturns joined #gluster
16:09 shubhendu joined #gluster
16:15 rafi joined #gluster
16:32 chirino joined #gluster
16:32 ivan_rossi left #gluster
16:35 rafi joined #gluster
16:36 tertiary joined #gluster
16:39 rafi joined #gluster
16:51 spalai joined #gluster
16:55 netzapper joined #gluster
16:56 netzapper hi, I'm trying to report a bug on the redhat bugzilla for glusterfs... but we're using version 3.7.10... and the only options in the form are 2.1, 3.0, and 3.1. Which should I choose?
16:57 JoeJulian file a bug
16:57 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:57 JoeJulian Try that link. ^
16:57 netzapper ah, that's better
16:57 netzapper are 3.7.11 and 3.8 publicly available?
16:58 JoeJulian Also, 3.7.11 is.
16:58 rafi joined #gluster
16:58 netzapper well, maybe we should try that before I report
16:58 JoeJulian Or at least check the release notes.
16:59 netzapper yeah, I have a feeling you didn't mean to fix this bug if you did.
16:59 netzapper :)
16:59 JoeJulian https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.11.md
16:59 glusterbot Title: glusterfs/3.7.11.md at release-3.7 · gluster/glusterfs · GitHub (at github.com)
16:59 JoeJulian I'm not a dev.
16:59 netzapper oh
16:59 JoeJulian Is it something interesting? :)
17:20 netzapper in production, we (twice a day?) occasionally get an unkillable process in state R with 100 %CPU in top. `perf` shows it's stuck in `unlock_page` in the kernel (http://hastebin.com/uyapuwuxux.css).
17:20 glusterbot Title: hastebin (at hastebin.com)
17:21 netzapper same basic symptoms as this guy: http://unix.stackexchange.com/questions/258896/process-stuck-in-kernel-code-what-is-it-doing
17:21 glusterbot Title: performance - Process stuck in kernel code, what is it doing? - Unix & Linux Stack Exchange (at unix.stackexchange.com)
17:24 rafi joined #gluster
17:33 shaunm joined #gluster
17:38 gbox netzapper:  How does gluster figure into your problem?  If you notice that StackExchange post was closed as too broad.
17:39 netzapper gbox: process of elimination. There's no other fuse filesystem used on the cluster, and yet that is where it always stops.
17:50 gbox netzapper: what distribution+release are you using?  Have you tried changing mount options (increased logging for example)?  I would guess that glusterfs-fuse is one of the more stable components
17:53 netzapper Xubuntu 14.04 (we use the desktop version because our task is OpenGL accelerated and it was easier to set up than adding X+GL to the server distro). We also thought glusterfs was the stable component... I spent days trying to blame this on the proprietary nVidia drivers. But it always freezes in a stack frame above `fuse_perform_write`, and trace outputs from the process show that the task is just about to write some more data to the shared filesystem
17:54 netzapper I think the issue is related to the striped storage configuration we're using, so we're rebuilding the volume with distributed+replicated instead.
17:55 netzapper I mean, the actual issue is that there's a bug that creates an unkillable process. That's an actual bug either in gluster or the kernel. But, we're trying to mitigate it by turning off the feature.
17:55 hackman joined #gluster
17:57 gbox netzapper: Ah yeah I think the stripe feature is semi-deprecated
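The replacement layout netzapper describes can be sketched as below; volume name, hostnames, and brick paths are placeholders:

```shell
# Sketch: recreate the volume as distribute+replicate instead of stripe.
gluster volume create gv0 replica 2 \
    node1:/bricks/brick1 node2:/bricks/brick1 \
    node3:/bricks/brick1 node4:/bricks/brick1
gluster volume start gv0
gluster volume info gv0    # should report "Type: Distributed-Replicate"
```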
18:02 mhulsman joined #gluster
18:02 rafi joined #gluster
18:04 netzapper I love "semi-deprecated" features... "Uh, yeah, that's in there... but, like, we don't know anybody who uses it..."
18:06 gbox netzapper:  It seemed like a good idea but the hot tier feature seems the likely fix for performance.  In your case I wonder are you using OpenGL for graphics or just to boost performance?
18:07 netzapper we do highly data-parallel image processing and analysis. We use a bunch of the imaging features in OpenGL.
18:09 gbox netzapper:  OK I do some of that too.  There are others (like Shazam) that use GPU processing for speed.  Does your application use a local, in-memory copy and then read/write from gluster?
18:10 netzapper in-memory copy of what?
18:10 gbox netzapper: the images
18:11 netzapper oh hell no. They're *enormous*, like several GiB decompressed. We use a progressive rendering system with tiled images.
18:12 netzapper so we're constantly streaming in image data, then writing out little chunks of output image... then we have separate "gather" steps that stitch the image chunks back into full-size images (while also writing acceleration data, etc.).
18:13 gbox netzapper:  I actually pull the image from gluster, work on it writing the intermediate copy to memory, then put the finished version back in gluster.  Memory is cheap, you can easily have 256GB+ in a server/workstation.
18:13 netzapper and we do.
18:14 netzapper ...have that much memory in each node.
18:14 wnlx joined #gluster
18:15 netzapper but any particular analysis task only needs a tiny piece of the image.
18:15 netzapper so it's much better in terms of CPU utilization to have every task only grab as much data as it needs.
18:16 netzapper if gluster isn't capable of those kinds of workloads, it means we chose the wrong tool. I'm not going to re-write the analysis kernels around the inadequacies of a filesystem.
18:16 netzapper especially since what we ask is not unreasonable.
18:18 netzapper I feel like if I spawn 5000 processes, and each one writes out into a separate file, any filesystem on the planet should be able to handle that safely. Maybe not super fast, or in true parallel, but it shouldn't create an unkillable zombie process.
18:18 gbox netzapper:  I have just noticed gluster handles whole files better.  rsync diff algorithm caused gfid mismatches for me on replicated volumes.  High concurrency applications might also cause problems.
18:18 netzapper fair enough.
18:19 netzapper we're not doing partial file updates in the portion of the code that's freezing.
18:19 netzapper it's literally "open file, dump byte vector, close file".
18:19 netzapper not even append.
18:19 gbox netzapper: Sure I think your problem is stripe related and possibly Ubuntu fuse & kernel related.
18:20 netzapper if the stripes are semi-deprecated, then that's probably my problem. :)
18:20 R4yTr4cer joined #gluster
18:20 rafi joined #gluster
18:21 gbox netzapper: Ha, nicely stated.  But gluster can't keep the kernel from killing a process.  However the fuse module could: http://www.gluster.org/community/documentation/images/thumb/2/2f/FUSE-access.png/500px-FUSE-access.png
18:23 netzapper right, I know it's not literally the gluster code that's faulting... and it could be a general defect in FUSE, since there's no indication in the SE posting that they're using gluster.
18:24 netzapper but if I report the gluster bug, people who actually know the gluster code will be able to diagnose if it's in gluster (perhaps giving bad info to the kernel), or if it's in the kernel module itself (because gluster is faultless and the kernel fumbles it).
18:25 netzapper I dunno. My guy's reinstalling with 3.7.11 and switching to distrib+replicate... we'll see if things improve.
18:28 gbox netzapper:  good plan, good luck
18:29 netzapper it's just super annoying because we had a host of our own concurrency bugs... and now this is the showstopper. :(
18:32 spalai joined #gluster
18:33 R4yTr4cer joined #gluster
18:45 marbu joined #gluster
18:46 skylar joined #gluster
18:52 spalai left #gluster
19:04 hagarth joined #gluster
19:06 dlambrig_ joined #gluster
19:09 coredump|br joined #gluster
19:09 Caveat4U joined #gluster
19:10 nishanth joined #gluster
19:11 Caveat4U I have been having an issue with an unsynced entry. I have a feeling that if I perform a heal, that would sync it, but I’m not certain. When I try to start the heal, I get “Launching heal operation to perform index self heal on volume nmd has been unsuccessful”
19:11 Caveat4U The log messages have not been very clear - so I came to you
19:24 rouven joined #gluster
19:26 JoeJulian Caveat4U: check "gluster volume status" and make sure your self-heal demons are running.
19:26 JoeJulian s/demons/daemons/
19:26 Caveat4U They are - I tried disabling and re-enabling
19:26 glusterbot What JoeJulian meant to say was: Caveat4U: check "gluster volume status" and make sure your self-heal daemons are running.
19:27 JoeJulian Maybe the demons *are* running! :D
19:27 JoeJulian Anything in the logs when that fails?
19:29 dlambrig_ joined #gluster
19:29 Caveat4U http://paste.fedoraproject.org/357931/11805871/
19:30 glusterbot Title: #357931 Fedora Project Pastebin (at paste.fedoraproject.org)
19:32 JoeJulian Yeah, that's one log... The logs involved will be the cli.log on the machine you run the command on. All glusterd.vol.log on all participating servers. And glustershd.log on all participating servers. One of them should log the fault.
19:32 Caveat4U I’m actually tailing all 3 logs - that’s the combined messages from all logs when set to DEBUG level
19:32 JoeJulian Hopefully it'll have enough information for you to see what the failure is. I'll be back in a bit. I need to grab some lunch.
19:33 Caveat4U Let me check the glusterd.vol.log - I wasn’t tailing that one
19:36 Caveat4U I do see this message “E [MSGID: 106301] [glusterd-op-sm.c:4160:glusterd_op_ac_send_stage_op] 0-management: Staging of operation 'Volume Rebalance' failed on localhost : Rebalance not started."
19:36 Caveat4U And on a different node “W [MSGID: 106222] [glusterd-rebalance.c:699:glusterd_op_stage_rebalance] 0-management: Missing rebalance-id”
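The usual checks for a failing index self-heal, using the volume name `nmd` from this log:

```shell
# Sketch: diagnose "Launching heal operation ... has been unsuccessful".
gluster volume status nmd                  # every brick and Self-heal Daemon should show Online "Y"
gluster volume heal nmd info               # lists entries still pending heal, per brick
gluster volume heal nmd info split-brain   # rules out split-brain as the cause
gluster volume heal nmd                    # retries the index self-heal once daemons are up
```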
19:36 deniszh joined #gluster
19:41 Caveat4U We have 4 bricks on RedHat and 2 on Debian - I’m looking and it looks like the versions are mismatched (3.7.11 on RedHat and 3.7.8 on Debian). We have jobs that automatically update the machine packages…could this be the reason they don’t like talking to each other?
19:43 JoeJulian It *shouldn't* be, but if the problem follows the difference, I would certainly consider that.
19:43 Caveat4U Hmm - OK -
19:43 * Caveat4U just noticed that
19:44 Caveat4U I’m not sure how long the versions have been mismatched
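A quick way to audit the mismatch across nodes; hostnames here are placeholders:

```shell
# Sketch: compare the installed gluster version on every node.
for h in gluster0{1..6}; do
    echo -n "$h: "
    ssh "$h" glusterfsd --version | head -n1
done
# The cluster-wide operating version lives in glusterd's state file:
grep operating-version /var/lib/glusterd/glusterd.info
```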
19:44 JoeJulian btw... do you use replication?
19:45 Caveat4U yup
19:45 Caveat4U 3x2
19:46 JoeJulian How do you handle automatic updates? Do you somehow wait for heals to finish between upgrading servers?
19:46 Caveat4U There is an ansible recipe that happens once per day
19:47 * post-factum is waiting for heal to be finished for 5x2 replica with 26T of data right now...
19:47 Caveat4U It is just supposed to upgrade security patches
19:47 Caveat4U But…apparently…it is doing more
19:47 * Caveat4U is deep diving
19:51 robb_nl joined #gluster
20:04 dlambrig_ joined #gluster
20:08 Caveat4U Alright - everything is upgraded @JoeJulian
20:08 Caveat4U So, for the 1 unsynced entry, should I use a heal or a rebalance?
20:08 JoeJulian heal
20:08 JoeJulian rebalance is for rebalancing.
20:09 Caveat4U huh
20:10 Caveat4U [root@gluster01 csterling]# gluster volume heal nmd full
20:10 Caveat4U Launching heal operation to perform full self heal on volume nmd has been unsuccessful on bricks that are down. Please check if all brick processes are running.
20:10 Caveat4U But when I check - all of them have PIDs
20:10 Caveat4U At least it’s a different error
20:11 JoeJulian My guess is that the bricks didn't get restarted during the upgrade.
20:11 mhulsman joined #gluster
20:12 Caveat4U I’ll just run a glusterd restart on them
20:13 Caveat4U Let’s see if they come back up
20:13 netzapper damnit... switching away from striped to distrib+replicate didn't help...
20:14 Hesulan left #gluster
20:15 Caveat4U OK I lied - the brick has a PID, but the NFS server does not
20:19 JoeJulian netzapper: My guess is a memory deadlock. Are the clients also servers?
20:19 netzapper JoeJulian: yes. The compute nodes are also the storage nodes.
20:21 JoeJulian According to everyone at any conference right now, containers are the solution to all your problems... ahem...
20:21 netzapper lol
20:21 netzapper aren't they the solution to all of everybody's problems?
20:21 hagarth netzapper: are you using nfs?
20:22 netzapper hagarth: no
20:22 netzapper literally just ext4 for the system disk, and glusterfs for the distributed volume
20:26 dlambrig_ joined #gluster
20:27 netzapper JoeJulian: is it not supported to have servers also act as clients?
20:27 JoeJulian It is.
20:28 JoeJulian It just meshes nicely with my theory of memory contention.
20:28 netzapper I see.
20:28 JoeJulian If I'm right, running the servers in VMs would solve it.
20:28 JoeJulian Perhaps containers if you know what you're doing with cgroups.
20:29 netzapper man, I have no idea how to achieve that.
20:29 netzapper our compute load has to run on a bare OS because of GPU requirements.
20:29 netzapper so "virtualize all the things" isn't an option.
20:29 JoeJulian Sure, and I wasn't suggesting changing that.
20:29 JoeJulian Just the gluster server.
20:30 netzapper wait, like, literally `glusterd`?
20:31 JoeJulian Sure, you hand off the storage devices to qemu-kvm and run an os and glusterd inside the kvm.
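Short of a full VM, the cgroups route JoeJulian mentions can be sketched as fencing glusterd into its own memory cgroup. This assumes cgroup v1 as on a 14.04-era kernel; the 8G limit is an arbitrary example, and brick processes (`glusterfsd`) would need the same treatment to be effective:

```shell
# Sketch: give glusterd its own memory cgroup so server-side memory use
# can't contend with the compute jobs on the same node.
mkdir -p /sys/fs/cgroup/memory/glusterd
echo $((8 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/glusterd/memory.limit_in_bytes
echo "$(pidof glusterd)" > /sys/fs/cgroup/memory/glusterd/cgroup.procs
```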
20:31 netzapper yeah.
20:31 JoeJulian Hey, what happened to hagarth.... :)
20:31 JoeJulian s/)/(/
20:31 glusterbot JoeJulian: Error: u'/^(?!s([^A-Za-z0-9\\}\\]\\)\\>\\{\\[\\(\\<\\\\]).+\\1.+[ig]*).*).*/' is not a valid regular expression.
20:31 JoeJulian lol
20:31 netzapper man, I wish Ops had hired their damn HPC admin already...
20:32 * netzapper is product manager in engineering. :(
20:32 JoeJulian I was hoping he would chime in if he thought I was completely out in left field.
20:35 netzapper I like your theory, personally
20:35 netzapper it's a plausible theory for getting stuck in a spinlock on `unlock_page`.
20:37 shyam JoeJulian: hagarth is in a conference (just letting you know)
20:38 JoeJulian I should have gone to Vault.
20:38 netzapper (and now I hear from Science, who implemented detailed tracing, that the compute kernels are stuck inside  a totally bog-standard `write` call to a completely fresh file. No mmap, no fancy ioctls... just `write(2)`.)
20:38 netzapper oh shit!
20:39 netzapper and I have a theory that the files they're writing are smaller than a single page.
20:39 Caveat4U JoeJulian: Alright - I’ve tried everything I can think of - it still says NFS server is offline
20:40 Caveat4U I’m not sure what I’ve done wrong
20:40 Caveat4U But I’ve done something wrong
20:41 jwd joined #gluster
20:41 netzapper Caveat4U: haven't we all.
20:42 ctria joined #gluster
20:42 Caveat4U I used black magic voodoo to store my files and now karma has caught up to me
20:42 Caveat4U Do I sacrifice a goat to appease the gluster gods?
20:42 post-factum one must sacrifice a couple of virgins first, i guess
20:43 post-factum if you have replica 2, then 2 virgins
20:43 Caveat4U hahaha
20:44 JoeJulian @pasteinfo
20:44 glusterbot JoeJulian: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
20:45 Slashman joined #gluster
20:47 Caveat4U I actually just got myself into more trouble - I restarted one of the bricks and gluster won’t start at all now. I’m debugging that
20:49 jobewan joined #gluster
20:51 dlambrig_ joined #gluster
20:59 JoeJulian Odds tend toward a volume hash mismatch. Should say in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
21:00 JoeJulian If you can't find any errors in there, try "glusterd --debug" to run glusterd in the foreground. If I can't find anything in the log, I usually can in the debug.
21:05 Caveat4U The reason why the brick wasn’t coming back online was because the entry in /etc/fstab was missing…
21:05 dlambrig_ joined #gluster
21:05 Caveat4U So, the bricks are all back in rotation
21:05 JoeJulian Yay
21:05 Caveat4U One of the servers I rolled has NFS server working great
21:05 Caveat4U The other one…not so much still
21:06 mhulsman joined #gluster
21:06 Caveat4U Let me set this to debug and get that pastebin
21:08 Caveat4U http://paste.fedoraproject.org/357971/61186504/
21:08 glusterbot Title: #357971 Fedora Project Pastebin (at paste.fedoraproject.org)
21:09 JoeJulian Yep, can't have both the kernel nfs server and the gluster nfs server at the same time.
21:11 JoeJulian But if you're not mounting the volume via nfs, you could always just set nfs.disable true.
21:11 Caveat4U http://paste.fedoraproject.org/357973/86666146/
21:11 glusterbot Title: #357973 Fedora Project Pastebin (at paste.fedoraproject.org)
21:11 Caveat4U I mean, the brick is online
21:12 Caveat4U But because the NFS server isn’t
21:12 Caveat4U I can’t run the heal
21:12 Caveat4U So I’m trying to get gluster’s NFS service up
21:12 JoeJulian Can't you?
21:13 JoeJulian But if you do want to get the gluster nfs service up, stop the kernel nfs first.
21:13 Caveat4U Stopped the service
21:14 Caveat4U Restarted glusterd
21:14 Caveat4U NFS server still offline
21:14 JoeJulian same 'Could not register with portmap' error?
21:15 Caveat4U Actually…huh...
21:15 Caveat4U It did come back
21:15 Caveat4U And the heal worked
21:15 JoeJulian Excellent.
21:15 Caveat4U There is still an “unsynced entry” being reported
21:15 Caveat4U But the heal ran this time
21:16 Caveat4U Which was great
21:23 Caveat4U Darn it
21:24 Caveat4U http://paste.fedoraproject.org/357980/46118746/
21:24 glusterbot Title: #357980 Fedora Project Pastebin (at paste.fedoraproject.org)
21:25 Caveat4U http://paste.fedoraproject.org/357981/87530146/
21:25 glusterbot Title: #357981 Fedora Project Pastebin (at paste.fedoraproject.org)
21:27 dlambrig_ joined #gluster
21:27 MugginsM joined #gluster
21:31 brandon_ joined #gluster
21:32 Caveat4U I’m going to try to come back to this tomorrow
21:40 russoisraeli joined #gluster
22:11 frakt joined #gluster
22:16 hackman joined #gluster
22:23 dlambrig_ joined #gluster
22:35 chirino joined #gluster
22:42 russoisraeli joined #gluster
22:48 kpease joined #gluster
22:58 jiffin joined #gluster
23:00 dlambrig_ joined #gluster
23:08 johnmilton joined #gluster
23:12 farhorizon joined #gluster
23:24 johnmilton joined #gluster