
IRC log for #gluster, 2016-05-01


All times shown according to UTC.

Time Nick Message
00:01 haomaiwang joined #gluster
00:25 dlambrig_ joined #gluster
00:35 marlinc joined #gluster
00:50 ctria joined #gluster
01:01 haomaiwang joined #gluster
01:18 russoisraeli joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:01 haomaiwang joined #gluster
02:17 betheynyx joined #gluster
02:18 mowntan joined #gluster
02:23 johnmilton joined #gluster
03:01 haomaiwang joined #gluster
03:11 natarej joined #gluster
03:34 skoduri joined #gluster
03:40 a2 joined #gluster
03:56 shyam left #gluster
04:00 atalur_ joined #gluster
04:01 haomaiwang joined #gluster
04:22 atalur_ joined #gluster
04:22 atalur joined #gluster
04:25 RameshN_ joined #gluster
04:47 baoboa joined #gluster
04:48 spalai joined #gluster
04:48 spalai left #gluster
04:53 skoduri joined #gluster
05:01 haomaiwang joined #gluster
05:12 samsaffron___ joined #gluster
05:20 Lee1092 joined #gluster
05:26 nishanth joined #gluster
05:31 F2Knight joined #gluster
06:01 haomaiwang joined #gluster
06:20 shubhendu joined #gluster
06:23 shubhendu joined #gluster
06:33 BitByteNybble110 joined #gluster
06:35 poornimag joined #gluster
07:01 haomaiwang joined #gluster
07:24 spalai joined #gluster
07:36 spalai joined #gluster
07:44 Wizek joined #gluster
08:01 overclk joined #gluster
08:01 haomaiwang joined #gluster
08:15 Wizek joined #gluster
08:16 ahino joined #gluster
08:30 kovshenin joined #gluster
08:33 level7 joined #gluster
08:42 level7_ joined #gluster
08:46 betheynyx joined #gluster
08:48 hackman joined #gluster
09:01 haomaiwang joined #gluster
09:24 natarej joined #gluster
09:31 MikeLupe joined #gluster
09:34 shirwah joined #gluster
09:44 shirwah Hi All, I'm running into a Glusterfs (3.5.2-1) high memory usage problem in my environment. Glusterfs nodes have 8GB of RAM, and after a while glusterfs eats up all available memory.
09:46 shirwah As suggested in Glusterfs documentation, this command will return glusterfs memory usage to normal: echo 2 > /proc/sys/vm/drop_caches
09:47 shirwah Has anyone experienced a similar issue, and does anyone know how to fix it?
09:48 spalai left #gluster
09:54 kovshenin joined #gluster
10:01 haomaiwang joined #gluster
10:02 MikeLupe Can you give me some hints about uninstalling ganesha on oVirt/Gluster? I did "yum install nfs-ganesha-xfs glusterfs-ganesha" but there remain some errors in oVirt about "detected conflict in hook start-POST-31ganesha-start.sh".
10:03 MikeLupe I could simply remove the cluster hooks, but it shows me that some stuff remained after uninstalling.
10:11 post-factum shirwah: lots of memory leaks were fixed in the 3.7 branch, so i'd recommend you upgrade
10:11 post-factum shirwah: also, caches are not memory leaks in general; if drop_caches frees that memory, it is available for other apps too
10:11 post-factum shirwah: see: http://www.linuxatemyram.com/
10:12 glusterbot Title: Help! Linux ate my RAM! (at www.linuxatemyram.com)
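A quick way to check whether the "missing" memory is really page cache rather than glusterfs allocations, alongside the drop_caches command shirwah quoted (a minimal sketch; run as root, and note that dropping caches only masks the symptom if the process itself is leaking):

    # show how much RAM is used by processes vs. held as buffers/cache
    free -m
    # drop dentry and inode caches (echo 1 drops page cache, echo 3 drops both)
    echo 2 > /proc/sys/vm/drop_caches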
10:12 spalai1 joined #gluster
10:15 DV joined #gluster
10:16 shirwah post-factum: Thanks for the reply.
10:20 spalai left #gluster
10:21 MikeLupe I simply resolved the conflicts / removed the cluster hooks in oVirt and it's ok now
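For reference, glusterd hook scripts live on each node under /var/lib/glusterd/hooks/1/<event>/<pre|post>/, so the conflicting ganesha hook in the oVirt error would normally be a leftover file there. A sketch for finding and removing it, assuming the default hook layout (the exact filename is taken from the error message and may differ):

    # locate any leftover ganesha hook scripts on a gluster node
    find /var/lib/glusterd/hooks -name '*ganesha*'
    # remove the stale start/post hook if it is no longer wanted
    rm -f /var/lib/glusterd/hooks/1/start/post/S31ganesha-start.sh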
10:23 spalai1 joined #gluster
10:27 atalur joined #gluster
10:53 level7 joined #gluster
11:01 haomaiwang joined #gluster
11:14 baoboa joined #gluster
11:26 russoisraeli joined #gluster
11:28 johnmilton joined #gluster
11:44 karnan joined #gluster
11:48 johnmilton joined #gluster
11:48 spalai|lunch left #gluster
11:55 kbyrne joined #gluster
11:56 ghenry joined #gluster
12:00 hchiramm joined #gluster
12:01 haomaiwang joined #gluster
12:02 swebb joined #gluster
12:20 EinstCrazy joined #gluster
12:28 ahino joined #gluster
12:38 DV joined #gluster
12:53 level7 joined #gluster
13:01 haomaiwang joined #gluster
13:27 ahino joined #gluster
13:39 MikeLupe joined #gluster
13:44 Gnomethrower joined #gluster
13:44 betheynyx joined #gluster
14:02 haomaiwang joined #gluster
14:19 poornimag joined #gluster
14:50 Wojtek joined #gluster
14:53 Wojtek https://paste.fedoraproject.org/361531/11436614/
14:53 glusterbot Title: #361531 Fedora Project Pastebin (at paste.fedoraproject.org)
14:53 Wojtek I have cluster.entry-self-heal: off, but my gv0 logs say there's some healing in progress
14:54 poornimag Wojtek, healing can be done either by the self heal daemon or from the mount point itself,
14:55 poornimag Wojtek, cluster.entry-self-heal disables only the healing from the self heal daemon
14:56 Wojtek how can I disable the mount point healing?
15:01 haomaiwang joined #gluster
15:05 poornimag Wojtek, sorry i said the opposite, cluster.entry-self-heal off disables healing from the mount point, and gluster volume set <volname>
15:05 poornimag self-heal-daemon off disables the self heal daemon
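Taken together, the two settings poornimag describes look roughly like this with the standard gluster CLI (a sketch; "gv0" stands in for the volume name, and cluster.self-heal-daemon is the fully qualified form of the self-heal-daemon option):

    # disable healing triggered from the client mount point
    gluster volume set gv0 cluster.entry-self-heal off
    gluster volume set gv0 cluster.data-self-heal off
    gluster volume set gv0 cluster.metadata-self-heal off
    # disable the self heal daemon
    gluster volume set gv0 cluster.self-heal-daemon off
    # reconfigured options are listed at the bottom of the volume info output
    gluster volume info gv0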
15:07 crashmag joined #gluster
15:07 Wojtek alright, so according to my settings it should be off
15:07 Wojtek Do you have an idea why I still see entry self heal deleting assets in the logs?
15:08 atalur_ joined #gluster
15:12 level7 joined #gluster
15:19 poornimag Wojtek, if cluster.entry-self-heal is disabled it shouldn't really log these messages
15:20 poornimag Wojtek, One thing you could try is to check if it is really disabled by taking statedump of the mount process, and grep for "entry_self_heal" in statedump file
15:20 poornimag http://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/
15:20 glusterbot Title: Statedump - Gluster Docs (at gluster.readthedocs.io)
15:24 Wojtek Thanks poornimag, I will look into the statedump
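The statedump procedure for a client mount boils down to signalling the glusterfs process and grepping the resulting file; a rough sketch, assuming the default dump directory of /var/run/gluster (dump file names include the PID and a timestamp):

    # find the glusterfs client process serving the mount
    ps ax | grep '[g]lusterfs'
    # SIGUSR1 asks a gluster process to write a statedump
    kill -USR1 <pid-of-mount-process>
    # check what the client actually has configured for self-heal
    grep -i 'self_heal' /var/run/gluster/glusterdump.<pid>.dump.*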
15:25 jiffin joined #gluster
15:30 karnan joined #gluster
15:32 bluenemo joined #gluster
15:40 nehar joined #gluster
15:43 Wojtek https://paste.fedoraproject.org/361534/14621173/
15:43 glusterbot Title: #361534 Fedora Project Pastebin (at paste.fedoraproject.org)
15:43 Wojtek dump says all healing options are activated
15:48 Wojtek https://paste.fedoraproject.org/361535/14621177/
15:48 glusterbot Title: #361535 Fedora Project Pastebin (at paste.fedoraproject.org)
15:49 Wojtek another dump has the heal correctly set to off
16:00 JesperA joined #gluster
16:01 haomaiwang joined #gluster
16:11 hagarth joined #gluster
16:29 Iodun joined #gluster
16:40 Iodun joined #gluster
16:50 jiffin1 joined #gluster
16:53 kovshenin joined #gluster
16:53 jiffin1 joined #gluster
17:01 haomaiwang joined #gluster
17:04 robb_nl joined #gluster
17:19 ahino joined #gluster
17:22 MikeLupe Could this be a sign of split-brain?
17:22 MikeLupe Could not associate brick SRV02:/gluster/engine/brick1' of volume '58613966-43b3-4275-aed9-7293eb0999a1' with correct network as no gluster network found in cluster
17:23 MikeLupe All gluster nodes up and running and reachable from each other
17:33 JoeJulian MikeLupe: Nope, that's not split-brain. That's something else entirely that I've never seen before. Cool.
17:36 mhulsman joined #gluster
17:36 JoeJulian MikeLupe: where is that coming from? The word 'associate' doesn't appear in any log message.
17:42 jiffin joined #gluster
17:43 JesperA- joined #gluster
17:47 rastar joined #gluster
17:48 MikeLupe well - from ovirt engine log ;)
17:49 MikeLupe I f***ed up again when I restarted the network on my 3 hosts......
17:50 MikeLupe all gluster status seem ok
17:50 MikeLupe peer & volume
18:01 haomaiwang joined #gluster
18:08 daMaestro joined #gluster
18:12 ctria joined #gluster
18:22 daMaestro joined #gluster
18:26 F2Knight joined #gluster
18:48 mowntan joined #gluster
18:53 ctria joined #gluster
18:53 betheynyx joined #gluster
19:01 haomaiwang joined #gluster
19:17 daMaestro joined #gluster
19:20 kovsheni_ joined #gluster
19:29 kovshenin joined #gluster
19:49 daMaestro joined #gluster
19:53 kovsheni_ joined #gluster
20:01 haomaiwang joined #gluster
20:11 daMaestro joined #gluster
20:14 kovshenin joined #gluster
20:56 kovshenin joined #gluster
21:01 haomaiwang joined #gluster
21:08 kovsheni_ joined #gluster
21:13 kovshenin joined #gluster
21:15 kovshenin joined #gluster
21:37 john51 joined #gluster
22:01 haomaiwang joined #gluster
22:10 daMaestro joined #gluster
22:37 Mmike joined #gluster
22:38 Mmike Hello. Can I change the volume type from distributed to replica?
23:01 haomaiwang joined #gluster
23:04 javi404 joined #gluster
23:09 gdi2k joined #gluster
23:11 gdi2k hi, I'm trying to use gluster for sharing desktop user home directories between two terminal servers (LTSP). Everything seems to work fine, but when users log in, a lot of desktop stuff is broken (no taskbars, background image not loading, no window borders etc.). When I move the home directory back to a regular partition, things work fine again. I use usermod -m to move home directories for testing. What would cause such issues on gluster?
23:12 gdi2k desktop environment in this case is XFCE, in case it matters.
23:25 F2Knight joined #gluster
23:35 russoisraeli hello guys. I have a corrupted file on the gluster fs. It shows question marks for the inode and the other file info. I cannot delete it - I/O error. On the actual brick storage the file is fine. How can I fix/delete this file?
23:44 JoeJulian russoisraeli: check your client log. The reason for the fault is probably in there. Remount probably is the quick fix.
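A sketch of that check-the-log-then-remount sequence (paths are illustrative; the client log file is named after the mount point, e.g. a mount at /mnt/gv0 logs to mnt-gv0.log):

    # look for the I/O error and its cause in the client log
    less /var/log/glusterfs/mnt-gv0.log
    # quick fix: remount the volume (server and mount point are placeholders)
    umount /mnt/gv0
    mount -t glusterfs server1:/gv0 /mnt/gv0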
23:45 JoeJulian gdi2k: selinux?
23:45 JoeJulian Mmike: yep, just add enough bricks to replicate your volume.
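Concretely, a distribute-only volume is converted by raising the replica count while adding matching bricks, e.g. for a two-brick distributed volume going to replica 2 (a sketch; hostnames and brick paths are placeholders):

    # add one new brick per existing brick and declare the new replica count
    gluster volume add-brick gv0 replica 2 server3:/bricks/b1 server4:/bricks/b2
    # populate the new bricks and watch the heal progress
    gluster volume heal gv0 full
    gluster volume heal gv0 info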
23:48 primehaxor joined #gluster
23:51 F2Knight joined #gluster
23:57 primehax_ joined #gluster
