IRC log for #gluster, 2016-03-06

All times shown according to UTC.

Time Nick Message
00:37 jhyland joined #gluster
00:41 Dave joined #gluster
01:02 jhyland joined #gluster
01:28 ovaistariq joined #gluster
01:35 hackman joined #gluster
01:41 baojg joined #gluster
01:45 baojg joined #gluster
01:47 baojg joined #gluster
01:48 plarsen joined #gluster
02:28 EinstCrazy joined #gluster
02:36 EinstCrazy joined #gluster
02:38 hackman joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:57 baojg joined #gluster
03:01 baojg joined #gluster
03:06 tswartz joined #gluster
03:10 Dave joined #gluster
03:12 baojg joined #gluster
03:16 tswartz joined #gluster
03:25 EinstCrazy joined #gluster
03:29 ovaistariq joined #gluster
04:00 baojg joined #gluster
04:03 Lee1092 joined #gluster
04:06 baojg_ joined #gluster
04:08 m0zes joined #gluster
04:09 cpetersen joined #gluster
04:12 baojg joined #gluster
04:20 jbrooks joined #gluster
04:39 suliba joined #gluster
04:52 tswartz joined #gluster
04:52 EinstCrazy joined #gluster
04:57 Manikandan joined #gluster
04:59 EinstCrazy joined #gluster
05:07 EinstCrazy joined #gluster
05:11 baojg joined #gluster
05:16 EinstCrazy joined #gluster
05:22 baojg_ joined #gluster
05:30 ovaistariq joined #gluster
05:44 plarsen joined #gluster
06:37 Wizek joined #gluster
06:44 nbalacha joined #gluster
06:56 gem joined #gluster
07:05 Wizek joined #gluster
07:30 ovaistariq joined #gluster
07:57 Lee1092 joined #gluster
08:17 petan hello, I got server and node both running same version and I can't mount fs on node, when I execute mount it hang forever
08:18 petan commands like df hang as well until I kill gluster client process
08:44 EinstCrazy joined #gluster
08:45 petan 0-glusterfs: failed to get the 'volume file' from
08:45 petan that is what I see in logs
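The "failed to get the 'volume file'" error petan reports generally means the client could not fetch the volfile from glusterd on the server (TCP port 24007). A minimal diagnostic sketch, assuming hypothetical names `server1` (the gluster server), `gv0` (the volume), and `/mnt/gv0` (the mount point):

```shell
# Hypothetical host/volume names; adjust to your setup.
# Check that glusterd's management port is reachable from the client:
nc -zv server1 24007

# Retry the mount with debug logging to see why the volfile fetch fails:
mount -t glusterfs -o log-level=DEBUG,log-file=/tmp/gv0-client.log \
    server1:/gv0 /mnt/gv0
```

If the port probe hangs or is refused even with the firewall off, check that glusterd is actually running on the server and that the client resolves the server name to the right address.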
08:56 baoboa joined #gluster
09:31 ovaistariq joined #gluster
09:34 Peppard joined #gluster
09:36 robb_nl joined #gluster
09:41 baojg joined #gluster
09:49 baojg joined #gluster
09:52 baojg joined #gluster
09:59 post-factum petan: check your firewall first
10:01 petan post-factum: it's turned off on both nodes
10:21 jri joined #gluster
10:32 jri joined #gluster
10:57 jri joined #gluster
11:10 d0nn1e joined #gluster
11:18 mhulsman joined #gluster
11:32 ovaistariq joined #gluster
11:37 post-factum joined #gluster
11:42 nishanth joined #gluster
11:48 deniszh joined #gluster
11:58 deniszh joined #gluster
12:03 DV joined #gluster
12:05 bio_ joined #gluster
12:10 ahino joined #gluster
12:15 jri joined #gluster
12:37 baoboa joined #gluster
12:41 johnmilton joined #gluster
12:59 johnmilton joined #gluster
13:06 DV joined #gluster
13:19 liviudm joined #gluster
13:23 DV joined #gluster
13:24 DV__ joined #gluster
13:33 ovaistariq joined #gluster
13:46 deniszh joined #gluster
14:02 baoboa joined #gluster
14:15 EinstCrazy joined #gluster
14:18 EinstCrazy joined #gluster
14:26 EinstCrazy joined #gluster
14:30 EinstCra_ joined #gluster
14:33 yalu joined #gluster
14:35 ahino joined #gluster
14:47 EinstCrazy joined #gluster
14:53 EinstCrazy joined #gluster
15:01 robb_nl joined #gluster
15:03 EinstCrazy joined #gluster
15:15 jri joined #gluster
15:33 ovaistariq joined #gluster
15:48 hamiller joined #gluster
16:25 plarsen joined #gluster
17:07 cyberbootje joined #gluster
17:12 pilgrim_ joined #gluster
17:19 ovaistariq joined #gluster
17:32 kalzz joined #gluster
17:34 calavera joined #gluster
18:10 mhulsman joined #gluster
18:13 [o__o] joined #gluster
18:16 jri joined #gluster
18:19 nix0ut1aw joined #gluster
18:30 cholcombe joined #gluster
18:42 Wizek joined #gluster
18:45 jhyland joined #gluster
18:51 mhulsman1 joined #gluster
18:51 mhulsman2 joined #gluster
19:02 hackman joined #gluster
19:12 Wizek joined #gluster
19:18 kalzz joined #gluster
19:43 ahino joined #gluster
19:46 unlaudable joined #gluster
20:26 calavera joined #gluster
20:55 pilgrim_ joined #gluster
20:59 Pilgrim_ joined #gluster
21:00 Pilgrim_ Hi all, having issues with Mac clients mounting Gluster volumes by NFS... Mounting via terminal works fine, but in Finder, they only see SOME of the files in each folder. But they see all in terminal with "ls". Seeing other reports online with other NFS servers. Anyone seen this before?
21:10 post-factum Pilgrim_: tried to clean up mac-specific dot files/folders?
21:11 Pilgrim_ Yep, I've removed the .DS_Store files in one folder I'm seeing the issue in and every parent folder back to the root, no love. Also done "sudo dscacheutil -flushcache" and "dsmemberutil flushcache" which apparently have helped others, no joy there either.
21:12 Pilgrim_ Gluster 3.6.3 - hoping to upgrade to 3.7.8 but need to figure out if I can do in-place upgrade or if it definitely requires downtime
21:21 mhulsman joined #gluster
21:21 badone joined #gluster
21:23 ovaistariq joined #gluster
21:40 syadnom is there a way to convert an existing gluster volume with no replicas to replica 2?
21:43 ovaistariq joined #gluster
21:44 ovaistar_ joined #gluster
22:08 DV joined #gluster
22:11 ovaistariq joined #gluster
22:17 JoeJulian Pilgrim_: As long as you have a replicated volume, you can upgrade without down time.
22:17 JoeJulian syadnom: "gluster volume add-brick <volname> replica 2 <new brick(s)>"
22:17 syadnom no new bricks..
22:18 syadnom I had 2 bricks, not replicated.
22:18 syadnom lost some data
22:18 JoeJulian If I were doing it, I would create a new volume and move my files to it.
22:18 syadnom I did end up deleting and re-adding.
22:19 syadnom I had the volume in use as an ISO image store for XenServer.
22:19 syadnom Didn't think I needed any redundancy...but it turns out the way XenServer uses the volume caused some lost writes when I rebooted one of the xenserver hosts.
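The add-brick conversion JoeJulian suggests above, written out with hypothetical volume and brick names (a two-brick distribute volume needs one new brick per existing brick to become replica 2):

```shell
# Hypothetical names (gv0, server3/server4); for a 2-brick distribute
# volume, supply 2 new bricks when raising the replica count to 2.
gluster volume add-brick gv0 replica 2 \
    server3:/bricks/gv0 server4:/bricks/gv0

# Then trigger a full self-heal so data is copied onto the new replicas:
gluster volume heal gv0 full
```

Note this only adds redundancy going forward; as syadnom found, it cannot recover data already lost on an unreplicated volume.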
22:27 Pilgrim_ @JoeJulian: Thanks for the info. Do we just stop the service, run the upgrade (in our case, this means adding new EPEL repos to CentOS and issuing a "yum upgrade"), then start the service again?
22:28 JoeJulian Pilgrim_: Yes. Make sure any self-heals have finished before you upgrade the other half of a replica though.
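The rolling-upgrade procedure JoeJulian describes, as a sketch for one server at a time on CentOS (hypothetical volume name `gv0`; the replica peer stays online while this runs):

```shell
# On ONE replica server at a time. Hypothetical volume gv0.
systemctl stop glusterd          # 'service glusterd stop' on CentOS 6
pkill glusterfsd; pkill glusterfs  # also stop brick/self-heal daemons
yum -y update glusterfs\*        # after enabling the newer repo
systemctl start glusterd

# Before upgrading the next server, confirm healing has finished:
gluster volume heal gv0 info
```

Only move to the other half of the replica once `heal info` reports no pending entries.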
22:36 Pilgrim_ @JoeJulian: Great, thanks. Our cluster has had 48 files stuck in healing mode for the last week. Running a manual heal doesn't fix it. Doesn't seem to be a split-brain issue. How does one normally resolve this?
22:39 post-factum @heal
22:39 glusterbot post-factum: I do not know about 'heal', but I do know about these similar topics: 'heal-failed', 'targeted self heal'
22:39 post-factum @heal-failed
22:39 glusterbot post-factum: From anecdotal evidence, it looks like heal failed is where you have clean @extended attributes but the replicas differ in some other way, maybe size, maybe owner, group, etc.
22:40 post-factum oh thanks
22:40 post-factum @targeted self heal
22:40 glusterbot post-factum: https://web.archive.org/web/20130314122636/http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/
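The "clean extended attributes but replicas differ in size/owner/group" situation glusterbot describes can be checked by hand on each brick server. A sketch, assuming a hypothetical brick path `/bricks/gv0` and one of the stuck files:

```shell
# Run on EACH brick server and compare the output side by side.
# Hypothetical brick path; use a file listed by 'heal info'.
getfattr -m . -d -e hex /bricks/gv0/path/to/stuck-file
stat /bricks/gv0/path/to/stuck-file
```

Matching `trusted.afr.*` attributes with differing size or ownership points at the heal-failed case rather than split-brain.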
22:42 Pilgrim_ Haha thanks
22:45 post-factum excuse that bot
22:48 nix0ut1aw joined #gluster
22:49 nix0ut1aw joined #gluster
22:51 nix0ut1aw joined #gluster
22:52 misc mhh, why does the bot point to archives.org and not the regular website ?
22:59 JoeJulian Because that was the only place that information existed when that factoid was last modified.
23:01 misc so it should be added back to the website I guess ?
23:01 JoeJulian Or forgotten if it's no longer relevant.
23:13 johnmilton joined #gluster
23:13 calavera joined #gluster
23:41 jhyland joined #gluster
23:46 jockek joined #gluster