IRC log for #gluster, 2016-11-11

All times shown according to UTC.

Time Nick Message
00:21 gem joined #gluster
00:23 moss joined #gluster
00:23 moss joined #gluster
00:26 bowhunter joined #gluster
00:45 plarsen joined #gluster
00:48 moss joined #gluster
00:48 moss joined #gluster
00:50 zat joined #gluster
00:51 anoopcs joined #gluster
01:17 luizcpg joined #gluster
01:19 luizcpg left #gluster
01:22 luizcpg joined #gluster
01:24 moss joined #gluster
01:24 moss joined #gluster
01:26 moss joined #gluster
01:44 farhorizon joined #gluster
01:51 shdeng joined #gluster
01:58 moss joined #gluster
02:00 victori joined #gluster
02:11 Gambit15 joined #gluster
02:14 aj__ joined #gluster
02:19 hchiramm joined #gluster
02:31 haomaiwang joined #gluster
03:09 ankitraj joined #gluster
03:15 hagarth joined #gluster
03:15 vbellur joined #gluster
03:16 prth joined #gluster
03:29 ahino joined #gluster
03:31 Jacob843 joined #gluster
03:35 ppai joined #gluster
03:35 blu__ joined #gluster
03:35 satya4ever joined #gluster
03:44 kramdoss_ joined #gluster
03:46 nbalacha joined #gluster
03:50 magrawal joined #gluster
03:50 shubhendu joined #gluster
03:52 moss joined #gluster
03:57 nishanth joined #gluster
04:08 atinm joined #gluster
04:09 sanoj joined #gluster
04:10 ndarshan joined #gluster
04:14 shubhendu joined #gluster
04:17 jiffin joined #gluster
04:20 fang64 joined #gluster
04:22 prth joined #gluster
04:25 haomaiwang joined #gluster
04:28 poornima joined #gluster
04:31 buvanesh_kumar joined #gluster
04:35 karthik_us joined #gluster
04:35 luizcpg joined #gluster
04:40 itisravi joined #gluster
04:51 luizcpg joined #gluster
04:52 ashiq joined #gluster
04:57 k4n0 joined #gluster
05:07 aravindavk joined #gluster
05:07 mb_ joined #gluster
05:07 buvanesh_kumar_ joined #gluster
05:10 prasanth joined #gluster
05:11 haomaiwang joined #gluster
05:14 kramdoss_ joined #gluster
05:22 mb_ joined #gluster
05:22 prth joined #gluster
05:24 kotreshhr joined #gluster
05:29 Jacob843 joined #gluster
05:31 Qit joined #gluster
05:31 Qit test
05:33 f0rpaxe joined #gluster
05:38 haomaiwang joined #gluster
05:43 kramdoss_ joined #gluster
05:47 panina joined #gluster
05:51 Saravanakmr joined #gluster
05:53 haomaiwang joined #gluster
06:03 satya4ever joined #gluster
06:03 Bhaskarakiran joined #gluster
06:05 kdhananjay joined #gluster
06:06 hgowtham joined #gluster
06:07 apandey joined #gluster
06:07 rafi joined #gluster
06:08 skoduri joined #gluster
06:11 Muthu joined #gluster
06:25 prasanth joined #gluster
06:29 hchiramm joined #gluster
06:31 karnan joined #gluster
06:37 msvbhat joined #gluster
06:40 mb_ joined #gluster
06:42 jtux joined #gluster
06:59 masber joined #gluster
07:04 mhulsman joined #gluster
07:05 jkroon joined #gluster
07:17 mhulsman joined #gluster
07:31 rouven joined #gluster
07:39 [diablo] joined #gluster
07:49 karnan joined #gluster
07:56 gem joined #gluster
07:59 ashiq joined #gluster
07:59 buvanesh_kumar joined #gluster
08:00 titansmc left #gluster
08:02 mhulsman1 joined #gluster
08:04 hackman joined #gluster
08:05 mhulsman joined #gluster
08:12 [diablo] joined #gluster
08:35 smohan joined #gluster
08:43 riyas joined #gluster
08:44 mhulsman joined #gluster
08:51 rastar joined #gluster
08:58 rouven joined #gluster
09:01 flying joined #gluster
09:03 arc0 joined #gluster
09:07 aj__ joined #gluster
09:13 nix0ut1aw joined #gluster
09:15 devyani7 joined #gluster
09:17 buvanesh_kumar_ joined #gluster
09:26 karnan joined #gluster
09:36 msvbhat joined #gluster
09:38 Wizek_ joined #gluster
09:43 [diablo] joined #gluster
09:45 HitexLT joined #gluster
10:06 karnan joined #gluster
10:16 witsches joined #gluster
10:30 zat joined #gluster
10:39 witsches Hello everyone
10:40 witsches I am planning to set up a rather special scenario and wanted to ask if you guys could give me some advice regarding the architecture
10:41 witsches * on AWS
10:41 witsches * 1 TB Storage
10:42 witsches * should be capable of storing 500Mio small files, of max 3kB
10:42 witsches * distributed volume
10:43 witsches as I am on AWS, I can use as many nodes as needed, no limitations here
10:45 witsches * also I expect 300Mio writes/day
10:47 HitexLT joined #gluster
10:49 msvbhat joined #gluster
10:53 witsches joined #gluster
11:00 cloph no typo? way more than 3000 writes/s, probably not even distributed evenly across the day?
11:00 gem joined #gluster
11:01 witsches no typo
11:01 witsches yes, call me nuts
11:01 witsches :D
11:02 witsches but rather evenly distributed
11:02 misc you can use /nick to change your name if you want to be called "nuts"
11:02 cloph :-)
11:03 haomaiwang joined #gluster
11:04 cloph IIRC the maximum you can have in amazon ec2 is 3000 IOPS... why consider gluster? What would gluster's job be in this setup? distributed filesystems are not known for their small-file performance :-)
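(For a rough sense of the numbers under discussion, assuming the writes really are spread evenly over the day: 300,000,000 writes / 86,400 s ≈ 3,470 writes/s on average, so any peak above the average already lands past the 3,000 IOPS figure mentioned here.)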
11:04 skoduri_ joined #gluster
11:08 witsches my idea was to replace our current s3 usage with something self-built. If I create a distributed volume with lots of bricks, it should be possible to reach higher IOPS, no?
11:10 cloph there's network latency involved though.
11:11 * cloph doesn't have any experience whatsoever at that scale though
11:11 misc and metadata since gluster is backed by posix, if I am not wrong
11:12 witsches but I am willing to benchmark it - so let me ask a concrete question:
11:13 witsches What would you recommend regarding
11:13 witsches a) number of nodes
11:13 witsches b) number of bricks
11:13 witsches there is also a relatively new blogpost that addresses the issue btw:
11:13 witsches http://blog.gluster.org/2016/10/gluster-tiering-and-small-file-performance/
11:13 glusterbot Title: Gluster tiering and small file performance | Gluster Community Website (at blog.gluster.org)
11:14 cloph and /me was wrong re the 3000 IOPS, that is the sustained burst perf of the general purpose ones - provisioned IOPS ssd go up to 20000 IOPS - so with those you should be good
11:15 cloph witsches: again not clear why you would want gluster and not just plain volume, after all amazon already is in the range of five nines...
11:17 cloph depending on whether those writes are on the same set of files or completely random (500mio small files and 300mio writes is quite a strange ratio to me :-))
11:19 cloph but general rule with gluster: use at least three nodes (for maintaining quorum)
11:20 witsches I will do benchmarks with plain volumes, too
11:20 panina joined #gluster
11:22 witsches I'm evaluating all possibilities. And I have concerns with using nfs
11:23 witsches my hope is that the native gluster client performs better... But that needs to be benchmarked
11:25 cloph when using gluster, make sure inode size is large enough to store the extended attributes
11:26 cloph you still didn't answer though what you'd expect gluster to do in your setup.
11:26 kotreshhr left #gluster
11:30 witsches well, serve files. To multiple clients connected via network
11:31 witsches probably writes will come from only few clients
11:31 luizcpg joined #gluster
11:32 cloph ok, so you don't need gluster to provide failure tolerance/redundancy, just as an alternative way to mount as opposed to using nfs
11:32 witsches yes. That's why I want to use distributed mode. Failover is covered by amazon EBS (HA block device)
11:33 witsches AND the whole thing needs to be able to grow
11:33 witsches which is also a plus for gluster
11:34 mhulsman joined #gluster
11:37 cloph as for ec2: only the larger instance types support the higher io-perfs, so you need large instances anyway - I'd say try with just one node, and with a three-way distributed one.
11:38 cloph growing would be easy even with one instance - just throw in another disk and enlarge a raid0/lvm
11:39 witsches ok, will try and post results here - thanks!
11:43 cloph also keep the cloud monitoring data re ebs performance metrics please :-)
11:44 witsches y :)
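A minimal sketch of the setup cloph suggests: three nodes with one brick each in a plain distributed volume, which can later be grown with add-brick. The hostnames (node1-node4), brick device, brick paths and the volume name are placeholders, not taken from the discussion above.

    # format each brick with a larger inode size so gluster's extended
    # attributes fit inside the inode (per cloph's note above)
    mkfs.xfs -i size=512 /dev/xvdf
    mkdir -p /bricks/brick1 && mount /dev/xvdf /bricks/brick1

    # from node1, once glusterd is running on all three nodes
    gluster peer probe node2
    gluster peer probe node3
    gluster volume create vol1 \
        node1:/bricks/brick1/vol1 \
        node2:/bricks/brick1/vol1 \
        node3:/bricks/brick1/vol1
    gluster volume start vol1

    # native fuse mount from a client, as an alternative to nfs
    mount -t glusterfs node1:/vol1 /mnt/vol1

    # growing later: add another brick and spread existing files onto it
    gluster volume add-brick vol1 node4:/bricks/brick1/vol1
    gluster volume rebalance vol1 start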
11:47 prth joined #gluster
11:47 witsches joined #gluster
11:56 witsches joined #gluster
11:57 Peppard joined #gluster
12:03 ira joined #gluster
12:04 wadeholler joined #gluster
12:05 witsches joined #gluster
12:05 tomaz__ joined #gluster
12:05 prth joined #gluster
12:08 B21956 joined #gluster
12:22 tomaz__ joined #gluster
12:24 nishanth joined #gluster
12:36 prth joined #gluster
12:45 luizcpg_ joined #gluster
12:52 johnmilton joined #gluster
12:52 jiffin1 joined #gluster
12:56 titansmc joined #gluster
12:57 haomaiwang joined #gluster
12:57 titansmc Hi guys, is anyone using nagios-check-gluster? Could you explain to me how to use it? I can't manage to bring it up
12:58 titansmc opsld03:/gluster/data # gluster volume status volume1
12:58 titansmc Status of volume: volume1
12:58 titansmc Gluster process                             TCP Port  RDMA Port  Online  Pid
12:58 titansmc ------------------------------------------------------------------------------
12:58 titansmc Brick gluster1:/mnt/.bricks                 49153     0          Y       19598
12:58 titansmc Brick gluster2:/mnt/.bricks                 49153     0          Y       3029
12:58 titansmc NFS Server on localhost                     2049      0          Y       19581
12:58 glusterbot titansmc: ----------------------------------------------------------------------------'s karma is now -19
12:58 titansmc Self-heal Daemon on localhost               N/A       N/A        Y       19591
12:58 titansmc NFS Server on opsld04.bsc.es                N/A       N/A        N       N/A
12:58 titansmc Self-heal Daemon on opsld04.bsc.es          N/A       N/A        Y       32150
12:58 titansmc
12:58 titansmc Task Status of Volume volume1
13:01 arpu joined #gluster
13:08 panina joined #gluster
13:12 msvbhat joined #gluster
13:18 unclemarc joined #gluster
13:18 plarsen joined #gluster
13:19 skoduri joined #gluster
13:45 witsches joined #gluster
13:49 shyam joined #gluster
13:51 edong23 joined #gluster
13:52 tomaz__ joined #gluster
13:52 d0nn1e joined #gluster
13:59 hchiramm joined #gluster
14:00 witsches joined #gluster
14:01 post-factum titansmc, @paste
14:01 post-factum @paste
14:01 glusterbot post-factum: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
14:01 post-factum titansmc, ^^
14:02 titansmc http://termbin.com/g1u6
14:03 titansmc can anyone point me to the correct nagios plugin setup?
14:03 titansmc cheers!
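To show the general shape of such a check while the question is open, here is a minimal hand-rolled sketch, not the nagios-check-gluster plugin titansmc is asking about: it wraps the detail form of "gluster volume status" and turns any offline brick into a Nagios CRITICAL exit code. The default volume name is only an example.

    #!/bin/sh
    # hypothetical minimal check, not the packaged nagios plugin
    VOL="${1:-volume1}"
    OUT=$(gluster volume status "$VOL" detail 2>/dev/null)
    if [ -z "$OUT" ]; then
        echo "UNKNOWN - could not query volume $VOL"
        exit 3
    fi
    # in the detail output each brick reports a line like "Online  : Y"
    OFFLINE=$(printf '%s\n' "$OUT" | grep -c '^Online.*: N')
    if [ "$OFFLINE" -gt 0 ]; then
        echo "CRITICAL - $OFFLINE brick(s) offline in volume $VOL"
        exit 2
    fi
    echo "OK - all bricks online in volume $VOL"
    exit 0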
14:06 skoduri joined #gluster
14:16 m0zes joined #gluster
14:19 msvbhat joined #gluster
14:21 witsches joined #gluster
14:22 shyam left #gluster
14:28 coreping left #gluster
14:30 squizzi_ joined #gluster
14:30 nbalacha joined #gluster
14:38 shyam joined #gluster
14:41 kotreshhr joined #gluster
14:43 kramdoss_ joined #gluster
14:51 annettec joined #gluster
14:51 primehaxor joined #gluster
14:57 satya4ever joined #gluster
14:58 witsches joined #gluster
15:00 kdhananjay joined #gluster
15:03 bluenemo joined #gluster
15:05 nbalacha joined #gluster
15:09 bowhunter joined #gluster
15:15 hackman joined #gluster
15:26 Gambit15 joined #gluster
15:27 Manikandan joined #gluster
15:30 BitByteNybble110 joined #gluster
15:31 jiffin joined #gluster
15:39 luizcpg left #gluster
15:53 luizcpg joined #gluster
16:03 [diablo] joined #gluster
16:08 wushudoin joined #gluster
16:47 kpease joined #gluster
16:55 kpease_ joined #gluster
17:02 haomaiwang joined #gluster
17:02 haomaiwang joined #gluster
17:08 kpease joined #gluster
17:10 Muthu joined #gluster
17:13 jiffin joined #gluster
17:19 haomaiwang joined #gluster
17:35 kotreshhr left #gluster
18:13 vbellur joined #gluster
18:13 hagarth joined #gluster
18:16 MidlandTroy joined #gluster
18:29 bowhunter joined #gluster
18:31 javi404 joined #gluster
18:52 aj__ joined #gluster
19:03 MidlandTroy joined #gluster
19:06 Caveat4U joined #gluster
19:10 gem joined #gluster
19:21 Gambit15 Hey guys, is there a procedure to convert rep 2 to rep 3 arbiter 1 yet?
20:01 post-factum Gambit15: yes
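As far as I understand it, the procedure in newer releases is to add a third brick flagged as an arbiter to the existing replica 2 volume and let self-heal populate it; the volume name, host and brick path below are placeholders.

    # sketch, assuming a release with arbiter support and an existing
    # "replica 2" volume named myvol
    gluster volume add-brick myvol replica 3 arbiter 1 arbiternode:/bricks/arb/myvol
    # watch the self-heal that fills in the new arbiter brick
    gluster volume heal myvol info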
20:13 gem joined #gluster
20:24 satya4ever joined #gluster
20:31 portante joined #gluster
20:39 om2_ anyone know if the bug in 3.7.14 that causes clients to connect to newly added nodes instead of the preexisting nodes with the data on them was fixed in a later version?
20:41 arpu joined #gluster
20:42 ndk_ joined #gluster
20:44 om2_ no one seems to have that answer... It was a bug noted a while ago by a couple of us using the same version. Guess it was never addressed? I will then revert to the 3.4.2 default version on ubuntu instead of using the latest stable - that bug caused a production downtime nightmare
20:48 mhulsman joined #gluster
20:48 JoeJulian om2_: What's the bugzilla id?
20:49 om2_ I don't know.  I don't have a bugzilla login
20:49 JoeJulian They're super easy to get. ;) If you found a bug and didn't report it, I doubt it's been fixed.
20:50 om2_ Don't see glusterfs here...  https://www.bugzilla.org
20:50 glusterbot Title: Home :: Bugzilla :: bugzilla.org (at www.bugzilla.org)
20:51 JoeJulian file a bug
20:51 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:51 portante joined #gluster
20:52 om2_ JoeJulian:  I would have when I could.  But I am not about to spend multiple hours recreating the scenario to get debug outputs and configs and proof to file a bug.
20:52 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:52 JoeJulian Sorry, that wasn't a command. I was just triggering glusterbot.
20:53 om2_ JoeJulian: I think you said you were going to test this in a lab and check the bug out, so I didn't bother... NP, I'll just use the ubuntu 14.04 stable release.
20:53 JoeJulian LOL
20:53 JoeJulian I was planning to work with you, but it's not like I'm paid to do this. When you went away, I went on to the next thing.
20:53 om2_ Thought Glusterfs staff would be interested in the bug... even though I don't have the logs from it anymore
20:54 om2_ I understand.  NP
20:54 om2_ I'm not pointing fingers.  Just seeing if glusterfs project leads would want to check it out.
20:55 om2_ If this bug reoccurs with 3.4.2 then we will have to drop glusterfs for something like windows dfs or hadoop hdfs
20:55 om2_ It really hindered our production uptime
20:55 JoeJulian I'm sure you can get a refund. ;)
20:55 om2_ lmao
20:55 om2_ It's not about money in that sense
20:55 JoeJulian tbh, I don't even remember what the whole issue was.
20:56 om2_ Are there glusterfs engineers that would like to test this out?  I can post details of the scenario, but I can't get logs for it anymore.  If interested, let me know, otherwise, life goes on
20:59 JoeJulian Sure, post your problem to the gluster-devel mailing list as a quasi-bug-report. See if there's enough for them to guess at.
20:59 JoeJulian Who knows. Whatever it is might already be fixed.
21:00 JoeJulian afaict, the last time we talked was back in March. There's been many bugs fixed since then.
21:05 haomaiwang joined #gluster
21:09 ashp left #gluster
21:17 ndk_ joined #gluster
21:20 portante joined #gluster
21:29 om2_ cool
21:29 om2_ thanks JoeJulian
21:30 luizcpg joined #gluster
21:47 vbellur joined #gluster
21:47 hagarth joined #gluster
21:48 ndk_ joined #gluster
21:48 portante joined #gluster
22:06 haomaiwang joined #gluster
22:12 mhulsman joined #gluster
22:19 portante joined #gluster
22:24 ndk_ joined #gluster
22:36 ndk_ joined #gluster
22:45 portante joined #gluster
22:53 zat joined #gluster
22:56 ndk_ joined #gluster
23:07 luizcpg joined #gluster
23:11 plarsen joined #gluster
23:14 haomaiwang joined #gluster
23:25 plarsen joined #gluster
23:32 Caveat4U joined #gluster
23:37 plarsen joined #gluster
23:38 Marbug joined #gluster
23:40 Caveat4U joined #gluster
23:55 arpu joined #gluster
