
IRC log for #gluster, 2016-06-03


All times shown according to UTC.

Time Nick Message
00:04 RameshN joined #gluster
00:16 David_H_Smith joined #gluster
00:17 David_H_Smith joined #gluster
00:24 ahino joined #gluster
00:24 alghost moka111: Hi, I am interested in your situation. Could you share your progress on solving the problem, please?
00:26 hagarth joined #gluster
00:44 luizcpg joined #gluster
00:47 Chaot_s joined #gluster
01:02 haomaiwang joined #gluster
01:24 Lee1092 joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:58 haomaiwang joined #gluster
02:00 Pupeno joined #gluster
02:01 haomaiwang joined #gluster
02:27 plarsen joined #gluster
02:38 B21956 joined #gluster
03:01 haomaiwang joined #gluster
03:06 jiffin joined #gluster
03:20 jbrooks joined #gluster
03:22 poornimag joined #gluster
03:30 natarej joined #gluster
03:45 hagarth joined #gluster
03:49 itisravi joined #gluster
03:50 atinm joined #gluster
03:53 d0nn1e joined #gluster
03:53 nishanth joined #gluster
03:53 john51 joined #gluster
03:54 nbalacha joined #gluster
03:56 kotreshhr joined #gluster
03:57 kotreshhr left #gluster
03:59 kdhananjay joined #gluster
04:01 haomaiwang joined #gluster
04:02 Pupeno joined #gluster
04:07 hagarth joined #gluster
04:09 suliba joined #gluster
04:19 RameshN joined #gluster
04:32 gowtham joined #gluster
04:33 aspandey joined #gluster
04:40 sakshi joined #gluster
04:44 prasanth joined #gluster
04:50 jbrooks joined #gluster
04:59 jiffin joined #gluster
05:01 haomaiwang joined #gluster
05:02 GeniusOfTime joined #gluster
05:02 karthik___ joined #gluster
05:02 GeniusOfTime Any ganesha experts here? Should the nfs-grace resource be started or stopped during normal operation? Does it get triggered to start by failover, or should it be running to monitor failover?
05:07 ndarshan joined #gluster
05:09 Apeksha joined #gluster
05:11 overclk joined #gluster
05:13 aravindavk joined #gluster
05:13 gem joined #gluster
05:28 hgowtham joined #gluster
05:31 F2Knight joined #gluster
05:43 rafi joined #gluster
05:44 Bhaskarakiran joined #gluster
05:48 Saravanakmr joined #gluster
05:52 pur__ joined #gluster
05:53 Manikandan joined #gluster
05:56 rastar joined #gluster
06:01 haomaiwang joined #gluster
06:01 kotreshhr joined #gluster
06:03 Wizek joined #gluster
06:06 ashiq joined #gluster
06:09 karnan joined #gluster
06:12 atalur joined #gluster
06:16 Wizek joined #gluster
06:18 suliba joined #gluster
06:19 ppai joined #gluster
06:32 jtux joined #gluster
06:34 spalai joined #gluster
06:35 Wizek joined #gluster
06:35 7GHAA86BE joined #gluster
06:49 ramky joined #gluster
07:01 haomaiwang joined #gluster
07:04 deniszh joined #gluster
07:08 jri joined #gluster
07:10 kdhananjay joined #gluster
07:17 RameshN joined #gluster
07:24 ivan_rossi joined #gluster
07:26 ivan_rossi left #gluster
07:33 jri joined #gluster
07:44 spalai joined #gluster
07:45 anil_ joined #gluster
07:50 aspandey joined #gluster
08:01 haomaiwang joined #gluster
08:07 zzal joined #gluster
08:11 kovshenin joined #gluster
08:17 hackman joined #gluster
08:33 [Enrico] joined #gluster
08:36 lh_ joined #gluster
08:36 Gugge_ joined #gluster
08:36 lezo_ joined #gluster
08:36 karthik___ joined #gluster
08:37 ic0n_ joined #gluster
08:38 darshan joined #gluster
08:39 ackjewt joined #gluster
08:39 Saravanakmr joined #gluster
08:39 shortdudey123_ joined #gluster
08:39 uebera|| joined #gluster
08:39 uebera|| joined #gluster
08:39 Saravanakmr joined #gluster
08:39 Acinonyx joined #gluster
08:40 Larsen_ joined #gluster
08:40 [Enrico] joined #gluster
08:43 jakobs joined #gluster
08:46 ju5t joined #gluster
08:49 atalur joined #gluster
08:54 Pupeno joined #gluster
08:55 Slashman joined #gluster
08:57 scubacuda joined #gluster
08:58 nehar joined #gluster
09:00 sakshi joined #gluster
09:01 haomaiwang joined #gluster
09:06 Chr1st1an joined #gluster
09:15 MikeLupe11 joined #gluster
09:18 AppStore joined #gluster
09:23 Guest78611 joined #gluster
09:31 gem joined #gluster
09:53 johnmilton joined #gluster
09:57 klaas_ joined #gluster
09:58 ndevos GeniusOfTime: http://gluster.readthedocs.io/en/latest/presentations/ has a pointer to http://www.slideshare.net/GlusterCommunity/introduction-to-highlyavailablenfsserveronscaleoutstoragesystemsbasedonglusterfssdc2015 where it might be explained
09:58 glusterbot Title: Presentations - Gluster Docs (at gluster.readthedocs.io)
09:58 Wizek_ joined #gluster
09:59 kenansul- joined #gluster
10:01 haomaiwang joined #gluster
10:03 Mmike joined #gluster
10:06 Chinorro joined #gluster
10:07 tdasilva joined #gluster
10:11 social__ joined #gluster
10:14 samikshan joined #gluster
10:17 scubacuda joined #gluster
10:20 gowtham joined #gluster
10:25 Chinorro joined #gluster
10:26 tdasilva joined #gluster
10:30 Chr1st1an joined #gluster
10:42 aravindavk joined #gluster
10:43 AppStore joined #gluster
10:43 atinm joined #gluster
10:46 natarej joined #gluster
10:49 Slashman joined #gluster
10:49 jakobs joined #gluster
10:49 kovshenin joined #gluster
10:49 owlbot joined #gluster
10:51 lkoranda joined #gluster
10:51 ira_ joined #gluster
11:01 haomaiwang joined #gluster
11:07 aravindavk joined #gluster
11:07 arcolife joined #gluster
11:15 hgowtham joined #gluster
11:15 [Enrico] joined #gluster
11:15 jiffin joined #gluster
11:19 gem joined #gluster
11:19 hackman joined #gluster
11:24 jiffin joined #gluster
11:25 hackman joined #gluster
11:26 karthik___ joined #gluster
11:29 johnmilton joined #gluster
11:31 hackman joined #gluster
11:32 atinm joined #gluster
11:37 hackman joined #gluster
11:41 hackman joined #gluster
11:48 Javezim Anyone know of any guidelines when looking at Arbiter Volumes? I.e. how much storage you need to assign an arbiter volume compared to how much data storage there is
11:52 jiffin itisravi: ^^
11:54 tom[] joined #gluster
11:58 jri joined #gluster
11:59 [Enrico] is there any way to prevent the FS where one brick is (XFS) from being mounted on two different servers at the same time (like when you have it for failover purposes)
11:59 [Enrico] ?
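
Nobody answered this in the log, but the usual approach is to let a cluster manager own the mount so that only one node can have the filesystem active at a time. A minimal sketch using Pacemaker's ocf:heartbeat:Filesystem resource agent, with hypothetical device and directory names (fencing is still advisable so a failed node cannot keep the device open):

    # Pacemaker mounts the brick filesystem on exactly one node and
    # moves it on failover; all names below are placeholders:
    pcs resource create brick1_fs ocf:heartbeat:Filesystem \
        device=/dev/vg0/brick1 directory=/bricks/brick1 fstype=xfs
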
12:08 jri_ joined #gluster
12:09 guhcampos joined #gluster
12:10 Pupeno joined #gluster
12:11 itisravi Javezim: post-factum has done some tests and arrived at 1KB * no. of files.
12:15 atinm joined #gluster
12:19 ppai joined #gluster
12:21 itisravi Javezim: https://gist.github.com/pfactum/e8265ca07f7b19f30bb3 is the link to the tests he did.
12:21 glusterbot Title: glfs_arbiter.md · GitHub (at gist.github.com)
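
Summarizing the gist for quick reference: the arbiter brick stores only file names and metadata, so its size scales with file count rather than data volume. A minimal sketch of the sizing arithmetic and of creating an arbiter volume, using post-factum's ~1KB-per-file figure and hypothetical host, path, and volume names (add generous headroom in practice):

    # Rough arbiter brick sizing: ~1KB per expected file.
    files=10000000                              # expected number of files
    echo "$(( files * 1024 / 1024**3 )) GiB plus headroom"

    # Create a replica 3 volume whose third brick is a metadata-only arbiter:
    gluster volume create myvol replica 3 arbiter 1 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/arb1
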
12:27 Gnomethrower joined #gluster
12:32 Pupeno joined #gluster
12:36 post-factum aye
12:36 julim joined #gluster
12:42 chirino joined #gluster
12:43 ashiq kkeithley++, thanks
12:43 glusterbot ashiq: kkeithley's karma is now 26
12:45 Pupeno joined #gluster
12:47 jri joined #gluster
12:50 aravindavk joined #gluster
13:07 Lee1092 joined #gluster
13:10 gowtham joined #gluster
13:10 luizcpg joined #gluster
13:21 unclemarc joined #gluster
13:31 shyam joined #gluster
13:40 guhcampos joined #gluster
13:44 nbalacha joined #gluster
13:47 plarsen joined #gluster
13:53 skylar joined #gluster
14:09 b_bezak joined #gluster
14:21 shaunm joined #gluster
14:22 rafi1 joined #gluster
14:28 itisravi joined #gluster
14:37 kpease joined #gluster
14:39 Mmike joined #gluster
14:43 wushudoin joined #gluster
14:45 wushudoin joined #gluster
14:49 ivan_rossi joined #gluster
14:52 Pupeno joined #gluster
14:55 cholcombe joined #gluster
14:58 ajneil joined #gluster
14:58 ajneil /msg NickServ SETPASS ajneil wrhpszbguxtm Rec1luse_1617!
14:59 ajneil oops
15:00 ajneil I have an issue with failed snapshot removal on a 3.7.11 replica 3 volume
15:01 ajneil snapshot removal commit failed on one node
15:01 post-factum ajneil: do not forget to change all your passwords now
15:03 ajneil so now the snaps list for the volume is inconsistent across the cluster
15:06 ajneil anyone have any insight about how to recover from this?
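
For anyone who lands here with the same problem, the first step is usually to establish which peer disagrees about the snapshot list; a sketch of the inspection commands, with hypothetical volume and snapshot names:

    # Run on each peer and diff the output to spot the inconsistent node:
    gluster snapshot list myvol
    gluster snapshot status
    # Once the cluster view is understood, retry the failed removal:
    gluster snapshot delete mysnap
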
15:12 shyam joined #gluster
15:15 Pupeno joined #gluster
15:22 ajneil joined #gluster
15:28 rafi joined #gluster
15:31 BitByteNybble110 joined #gluster
15:42 David_H_Smith joined #gluster
15:42 David_H_Smith joined #gluster
15:52 Pupeno joined #gluster
16:02 F2Knight joined #gluster
16:05 guhcampos joined #gluster
16:15 shyam joined #gluster
16:20 skylar joined #gluster
16:43 haomaiwang joined #gluster
16:45 hackman joined #gluster
16:48 jiffin joined #gluster
16:57 Pupeno joined #gluster
17:01 haomaiwang joined #gluster
17:02 ivan_rossi left #gluster
17:08 Pupeno joined #gluster
17:16 chirino joined #gluster
17:43 kotreshhr joined #gluster
17:48 adamaN joined #gluster
18:03 aravindavk joined #gluster
18:14 kpease joined #gluster
18:16 kpease joined #gluster
18:17 kotreshhr left #gluster
18:18 Manikandan joined #gluster
18:19 Pupeno joined #gluster
18:23 kovshenin joined #gluster
18:25 PatNarciso I wanted to revisit the 'samba on fuse on tier' discussion from yesterday.  I was curious about how severe the bugs encountered may be.  @JoeJulian @ira
18:31 Pupeno joined #gluster
18:37 haomaiwang joined #gluster
18:41 squizzi joined #gluster
18:54 d0nn1e joined #gluster
19:01 theron joined #gluster
19:23 kpease joined #gluster
19:27 kpease joined #gluster
19:38 deniszh joined #gluster
19:40 deniszh joined #gluster
19:50 rwheeler joined #gluster
20:02 deniszh joined #gluster
20:15 Pupeno joined #gluster
20:20 johnmilton joined #gluster
20:27 Philambdo joined #gluster
20:31 deniszh joined #gluster
20:32 Pupeno joined #gluster
20:35 Pupeno joined #gluster
20:37 johnmilton joined #gluster
20:48 cliluw joined #gluster
20:57 kovshenin joined #gluster
21:01 JoeJulian PatNarciso: There are only two bugs in bugzilla regarding samba+tier (and multiple copies of those bugs to facilitate backporting): bug 1324439 and bug 1330567
21:01 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1324439 high, unspecified, ---, hgowtham, ON_QA , SAMBA+TIER : Wrong message display.On detach tier success the message reflects Tier command failed.
21:01 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1330567 high, unspecified, ---, rkavunga, ON_QA , SAMBA+TIER : File size is not getting updated when created on windows samba share mount
21:02 JoeJulian I still wouldn't use fuse, personally.
21:02 JoeJulian ... for samba.
21:05 PatNarciso I currently use samba on fuse to provide storage for video editing.  My previous attempt (about a year+ ago) at a non-fuse setup would sometimes take out glusterd for the volume.
21:05 PatNarciso happily I can say, samba+fuse has been very stable.
21:08 PatNarciso Like any video editing NAS, random io is high.  And on a distributed gluster setup backed by raid6 rotating bricks, response time is getting slow.  I wanted to explore adding an ssd hot tier.
21:12 PatNarciso I could only imagine how 1330567 would upset Adobe clients.
21:12 JoeJulian Are you replicating?
21:12 PatNarciso No.
21:14 JoeJulian Have you considered putting the filesystem journal on ssd but leaving the data on rust?
21:17 squizzi joined #gluster
21:18 PatNarciso Yes.  I become cautious when modifying a setup beyond what's considered 'common'.
21:21 JoeJulian Well, this is ephemeral data anyway or you would use replication, so the only risk is losing changes in the event of an ssd failure, which could be mitigated with raid1. You don't need much space for the filesystem journal, so it's not a big expense for the advantage.
21:27 PatNarciso I like the idea -- however our current 2 storage-nodes are based on commodity hardware: 8 rust disks; raid6.  to implement the external journal, i'd have to remove 2 disks from the array to make room for the raid1 ssd's.
21:28 JoeJulian Mmm, that wouldn't be quite so dense.
21:31 PatNarciso if the tier-bugs weren't a concern-- I was considering adding a 3rd server to the trusted storage-pool: 2xssd raid1, and having this machine be the network samba service.  (acceptable spof imo).
21:31 glusterbot PatNarciso: concern's karma is now -1
21:32 PatNarciso hmm.
21:34 JoeJulian Well, technically they're all single points of failure - just not total failure.
21:35 PatNarciso true.  brown-outs are always interesting.
21:39 PatNarciso I'm considering this modification: setting up the 3rd box for the network samba service, 2xssd raid1; adding it as a non-uniform brick to the trusted pool and enabling cluster.nufa to leverage it.
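
For reference, NUFA is a volume-level option; a minimal sketch, assuming a hypothetical volume name (verify the exact option value against the docs for your gluster version):

    # NUFA makes a client that is also a server prefer its own local
    # brick when creating new files; volume name is a placeholder:
    gluster volume set myvol cluster.nufa on
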
21:40 JoeJulian Ok, if you lose the journal you lose changes that were not synced to the filesystem. The MTBF of ssd is like 170 years. I'd accept that risk if it was me.
21:41 JoeJulian So 5-disk raid6 + ssd journal.
21:42 JoeJulian You could still add that 3rd server and be ahead storage-wise.
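
For anyone wanting to try the layout JoeJulian describes, XFS supports an external log device natively; a minimal sketch of the mkfs and mount steps, with placeholder device names:

    # Put the XFS journal on the ssd mirror, data on the rust raid6:
    mkfs.xfs -l logdev=/dev/md_ssd,size=128m /dev/md_rust
    # The external log must be named again at mount time:
    mount -o logdev=/dev/md_ssd /dev/md_rust /bricks/brick1
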
21:42 caitnop joined #gluster
21:45 PatNarciso JoeJulian, do you have any benchmarks to share?
21:47 JoeJulian Oh, and since journals use a relatively small percentage of the disk, the wear life should most certainly exceed the depreciation period.
21:47 JoeJulian I don't. I've done that with ceph, but not (yet) with gluster.
21:49 JoeJulian Even then, I'd have been doing it with vm images, not video files - though I expect they're probably pretty similar i/o footprints.
21:52 PatNarciso imo: video editing can be worse.  Video editors run reads and writes with little optimization.  VMs: there are i/o considerations at almost every layer.
21:52 JoeJulian Good point
21:53 Pupeno joined #gluster
21:53 Pupeno joined #gluster
21:56 PatNarciso if I were to remove some rust by making the 8-array a 7-array and slap in some ssd... I'd still have to resize the raid, and shrinking xfs remains impossible.  Hmm.  I'd have to benchmark this before implementing the changes, as the changes would be a mission.
22:00 kovshenin joined #gluster
22:03 shyam joined #gluster
22:03 PatNarciso JoeJulian, related to what we were discussing: external journal iozone http://raid6.com.au/posts/fs_ext4_external_journal/
22:03 glusterbot Title: ext4: using external journal to optimise performance (at raid6.com.au)
22:18 JoeJulian Yeah, if it was easy everyone would be doing it. :)
22:18 JoeJulian Neat... looks like AT&T might be having a nation-wide outage.
22:27 Pupeno joined #gluster
22:29 haomaiwang joined #gluster
22:52 cvstealth joined #gluster
23:08 shyam joined #gluster
23:08 theron joined #gluster
23:16 theron joined #gluster
23:35 shyam joined #gluster
23:46 foster joined #gluster
23:52 F2Knight joined #gluster
