
IRC log for #gluster, 2014-12-31


All times shown according to UTC.

Time Nick Message
00:00 iPancreas maybe have all nodes in the same DC but geo-replicate it to a different DC for disaster recovery purposes
00:04 iPancreas latency inside SoftLayer's data center is less than 0.2 ms
00:04 iPancreas latency among DCs in the Dallas area is between 0.5 ms and 1.01 ms....
00:05 iPancreas does anyone believe that using nodes in different DCs (in Dallas) might be a problem? (considering latency of no more than 1.1 ms)
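A minimal sketch of the layout iPancreas describes (all replica nodes in one DC, geo-replication to a second DC for disaster recovery), assuming GlusterFS 3.5+ distributed geo-replication; the hostnames (dal01, dal02, dfw-dr) and volume names (gv0, gv0-dr) are hypothetical:

    # replicated volume inside one data center
    gluster volume create gv0 replica 2 dal01:/export/brick1 dal02:/export/brick1
    gluster volume start gv0
    # geo-replication to the DR site (the slave volume gv0-dr must already exist and be started)
    gluster volume geo-replication gv0 dfw-dr::gv0-dr create push-pem
    gluster volume geo-replication gv0 dfw-dr::gv0-dr start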
00:15 ildefonso joined #gluster
00:26 tobias- joined #gluster
00:27 hagarth joined #gluster
00:53 calisto joined #gluster
01:08 Pupeno joined #gluster
01:08 Pupeno joined #gluster
01:12 badone joined #gluster
01:16 msmith_ joined #gluster
01:29 nangthang joined #gluster
01:46 elico joined #gluster
02:01 Lee- ildefonso, mind sharing how you set up ZFS in AWS? I've been using ZFS for a couple years now, but on dedicated hardware. I also have some VMs at AWS, but I don't quite understand why you'd get better performance with ZFS on AWS (unless I misunderstood).
02:17 nangthang joined #gluster
02:31 haomaiwa_ joined #gluster
02:55 elico joined #gluster
03:06 msmith_ joined #gluster
03:06 ildefonso Lee-, sure, it was mainly an experiment, so the setup may sound a bit crazy.  Basically I set up 25 EBS SSD volumes (no provisioned IOPS), created 5 RAID-Z vdevs, and striped across all 5 of them.  Used a memory-optimized instance, optimized for IO; I don't remember exactly which instance I picked.  Performance was surprisingly good and stable.  btw, I did this using ZFS on Linux (Ubuntu), and put PostgreSQL on it.  I have the feeling that the ARC cache is actually
03:06 ildefonso helping PostgreSQL performance, but more tests are needed.  I have a blog post that I should publish later this week.
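A rough sketch of the pool layout ildefonso describes, assuming ZFS on Linux with 25 attached EBS general-purpose SSD volumes; the pool name, dataset properties, and device names (/dev/xvdf onward) are hypothetical:

    # 5 raidz vdevs of 5 disks each; ZFS stripes writes across the vdevs
    zpool create -o ashift=12 pgpool \
        raidz xvdf xvdg xvdh xvdi xvdj \
        raidz xvdk xvdl xvdm xvdn xvdo \
        raidz xvdp xvdq xvdr xvds xvdt \
        raidz xvdu xvdv xvdw xvdx xvdy \
        raidz xvdz xvdaa xvdab xvdac xvdad
    # a dataset for PostgreSQL; 8k recordsize matches PostgreSQL's page size
    zfs create -o recordsize=8k -o compression=lz4 pgpool/pgdata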
03:12 Lee- ARC is a read cache though. Earlier you mentioned sync writes as being a bottleneck for you. ZFS can use a slog device as a write cache for sync writes, but that only makes sense if the slog device has better performance than the pool as a whole. I imagine with 5 vdevs of 5 disks each you had pretty good IOPS and wouldn't need an slog.
03:13 Lee- I do wonder if using a couple of storage blocks as type PIOPS with the max IOPS set as slog would work out. I've not even attempted ZFS in AWS, but then I've not had a need for the applications I'm managing in AWS
03:14 Lee- Right now I'm actually just using the AWS RDS with PIOPS of 1000
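A sketch of the slog idea Lee- floats above: attach a small provisioned-IOPS EBS volume (or a mirrored pair) as a separate ZIL log device so synchronous writes commit there before the main vdevs. Pool and device names are hypothetical:

    zpool add pgpool log mirror xvdba xvdbb
    zpool status pgpool    # the new log vdev should appear in the pool layout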
03:16 ppai joined #gluster
03:17 elico joined #gluster
03:20 kanagaraj joined #gluster
03:22 lalatenduM joined #gluster
03:25 ildefonso Lee-, it still has the same write issue, that is, basically, latency limiting writes.  However, the devices, grouped by ZFS, had higher performance than a single EBS volume (even with PIOPS).
03:28 ildefonso Lee-, the thing is that PostgreSQL *has to* know that data was in fact  written to disk, so it does synchronous writes (there are ways of avoiding that, but it will put data at risk, and no one really wants to do that).
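The trade-off ildefonso mentions, expressed as a postgresql.conf sketch (standard settings, nothing specific to his setup):

    synchronous_commit = on    # default: wait for WAL to reach disk before acknowledging a commit
    # synchronous_commit = off # faster commits, but the most recent transactions can be lost on a crash
    fsync = on                 # leave on; disabling it risks corrupting the cluster after a crash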
03:29 Lee- I'd imagine that even a raid 50 (without ZFS) would work wonders. I've not been limited in write latency so it's not something I've needed to test. I am curious as to how 5x vdevs with 5 disks each using ZFS would compare to raid 50 in similar configuration using another file system
03:29 ildefonso no, it doesn't.
03:29 ildefonso raid 50 is slower.
03:30 Lee- I wasn't considering the benefits of the ZIL. My ZFS pools on bare hardware only utilize a single vdev, but their use case is not I/O bound, so I've not had a need to run more vdevs (yet)
03:32 ildefonso and ZFS has the self-healing thing
03:36 fandi joined #gluster
03:37 ildefonso Lee-, however, ZFS will be slower if you don't have enough RAM.
03:38 Lee- Yeah, the high memory requirements are why I've not bothered even trying it in a cloud provider
03:39 bala joined #gluster
03:39 itisravi joined #gluster
03:43 nishanth joined #gluster
03:47 ildefonso have to go, I will be back, likely tomorrow.
03:47 shubhendu joined #gluster
04:05 soumya_ joined #gluster
04:06 msmith_ joined #gluster
04:12 nbalacha joined #gluster
04:16 msmith_ joined #gluster
04:20 aravindavk joined #gluster
04:23 spandit joined #gluster
04:27 SOLDIERz joined #gluster
04:37 suman_d_ joined #gluster
04:40 hchiramm joined #gluster
04:40 raghu joined #gluster
04:42 saurabh joined #gluster
04:46 Stewart joined #gluster
04:50 bala joined #gluster
04:50 kdhananjay joined #gluster
04:50 dusmant joined #gluster
04:53 ndarshan joined #gluster
04:56 lalatenduM joined #gluster
04:58 rafi1 joined #gluster
05:08 hchiramm joined #gluster
05:16 hchiramm joined #gluster
05:25 side_control joined #gluster
05:27 kumar joined #gluster
05:31 badone joined #gluster
05:38 RameshN joined #gluster
05:42 aravindavk joined #gluster
05:43 saurabh joined #gluster
05:46 ndk joined #gluster
05:48 msvbhat joined #gluster
06:07 rjoseph joined #gluster
06:18 atalur joined #gluster
06:26 hagarth joined #gluster
06:27 msvbhat joined #gluster
06:27 radez_g0n3 joined #gluster
06:28 ndk joined #gluster
06:33 meghanam joined #gluster
06:36 Guest9541 joined #gluster
06:38 atalur joined #gluster
06:46 anil joined #gluster
06:51 atalur joined #gluster
07:05 dusmant joined #gluster
07:11 suman_d_ joined #gluster
07:14 Gorian joined #gluster
07:21 livelace joined #gluster
07:31 hchiramm joined #gluster
07:44 glusterbot News from newglusterbugs: [Bug 1177899] nfs: ls shows "Permission denied" with root-squash <https://bugzilla.redhat.com/show_bug.cgi?id=1177899>
07:55 side_control joined #gluster
08:19 pprasanth joined #gluster
08:20 bala joined #gluster
08:22 pp__ joined #gluster
08:27 rp__ joined #gluster
08:29 kovshenin joined #gluster
08:36 ppai joined #gluster
08:46 livelace I am testing a simple replicated configuration between two hosts. If host 1 is shut down abnormally (while the client is writing data to the volume), then after recovery I can only see part of the data on host 1.
08:46 livelace If the client accesses (with a simple command: file /mnt/glustervolume/file1) data on the mounted volume that doesn't exist on host 1, the data will be replicated to host 1.
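What livelace sees is a self-heal being triggered by access. On replica volumes a heal can also be triggered and inspected explicitly; a sketch with a hypothetical volume name:

    gluster volume heal glustervolume         # heal entries the index knows need healing
    gluster volume heal glustervolume full    # crawl the whole volume for anything missed
    gluster volume heal glustervolume info    # list entries still pending heal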
08:51 LebedevRI joined #gluster
09:04 diegows joined #gluster
09:04 dusmant joined #gluster
09:21 XpineX joined #gluster
09:24 ppai joined #gluster
09:40 elico left #gluster
09:48 bala joined #gluster
10:13 Jaspal joined #gluster
10:14 fsimonce joined #gluster
10:20 bala1 joined #gluster
10:23 badone joined #gluster
10:33 zutto joined #gluster
10:36 zutto Could someone shed some light on the stat issue in glusterfs? I have a volume with 6 bricks, and at the root folder ls fails, while inside any of the folders in the volume it actually works
10:36 zutto fails as in it just hangs
10:46 ricky-ti1 joined #gluster
10:50 bala1 joined #gluster
10:57 virusuy joined #gluster
10:57 Pupeno joined #gluster
11:05 calum_ joined #gluster
11:39 badone joined #gluster
11:45 glusterbot News from newglusterbugs: [Bug 1177928] Directories not visible anymore after add-brick, new brick dirs not part of old bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1177928>
11:54 harish joined #gluster
11:55 badone joined #gluster
12:01 hagarth joined #gluster
12:05 ccha joined #gluster
12:12 badone joined #gluster
12:30 badone joined #gluster
12:31 ghenry joined #gluster
12:34 RameshN joined #gluster
12:56 lalatenduM_ joined #gluster
13:01 ccha joined #gluster
13:06 kovshenin joined #gluster
13:09 kovshenin joined #gluster
13:09 kovshenin joined #gluster
13:17 LebedevRI joined #gluster
14:08 stickyboy joined #gluster
14:46 pdrakeweb joined #gluster
15:00 msmith_ joined #gluster
15:21 georgeh-LT2 joined #gluster
15:23 georgeh-LT2 joined #gluster
15:37 sysadmin-di2e left #gluster
15:39 lmickh joined #gluster
15:42 _Bryan_ joined #gluster
15:47 hchiramm joined #gluster
15:50 ghenry joined #gluster
15:50 ghenry joined #gluster
16:07 suman_d joined #gluster
16:08 kalzz joined #gluster
16:12 neofob joined #gluster
16:12 iPancreas joined #gluster
16:13 side_control joined #gluster
16:13 jobewan joined #gluster
16:22 roost joined #gluster
16:23 hchiramm joined #gluster
16:52 coredump joined #gluster
16:54 vimal joined #gluster
16:57 hchiramm joined #gluster
17:10 meghanam joined #gluster
17:24 Pupeno_ joined #gluster
17:37 hchiramm joined #gluster
17:40 PeterA joined #gluster
17:42 neofob left #gluster
17:51 merlink joined #gluster
18:02 msmith_ joined #gluster
18:15 msmith_ joined #gluster
18:39 calisto joined #gluster
18:45 georgeh-LT2 joined #gluster
19:55 hchiramm joined #gluster
20:03 PatNarciso joined #gluster
20:04 PatNarciso Happy NYE Gluster!
20:18 tom[] and a real swell 2015 to you too PatNarciso
20:18 PatNarciso Thanks Tom[]!
20:48 PatNarciso question/suggestion: the cluster/nufa translator page suggests the default local-volume-name is matched based on the hostname of the system.  Should the volume naming conventions wiki be updated to include a reference to the hostname?
20:59 harish joined #gluster
21:12 JoeJulian Which page?
21:13 JoeJulian PatNarciso: ^
21:13 PatNarciso standby.
21:14 PatNarciso http://www.gluster.org/community/documentation/index.php/Translators/cluster/nufa   && http://www.gluster.org/community/documentation/index.php/HowTos:Brick_naming_conventions
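A sketch of the cluster/nufa option PatNarciso is referring to, written in the old volfile syntax from that page; the subvolume names are hypothetical, and if local-volume-name is unset nufa falls back to a subvolume whose name matches the local hostname:

    volume dist
      type cluster/nufa
      option local-volume-name ssd-brick        # prefer this subvolume for files created locally
      subvolumes ssd-brick hdd-brick1 hdd-brick2
    end-volume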
21:20 Pupeno joined #gluster
21:29 JoeJulian PatNarciso: Huh... I thought those old translator configuration pages were all marked for archive.
21:31 PatNarciso I land upon them via a google search.
21:35 * PatNarciso is considering using nufa; in a local cluster; for supporting file writes of video editors.
21:36 ricky-ti1 joined #gluster
21:36 PatNarciso the thought is, nufa would define an ssd (or striped ssds) for swift writes; and... (this is where I'm using google a lot) have slower disks hold the files for long-term storage.
21:37 PatNarciso in other words: my exercise is to attempt to create a tiers-without-tears ((like)) solution: http://blog.gluster.org/2014/01/tiers-without-tears/
21:43 PatNarciso I think to myself: I *can't* be the first person to do this.  use an ssd brick for quicker writes; and rebalance to distribute accordingly.  so I keep google'n.
21:44 _Bryan_ joined #gluster
21:54 JoeJulian Pretty much, yeah. Tiered storage is being actively worked on.
21:55 JoeJulian http://www.gluster.org/community/documentation/index.php/Features/data-classification
21:55 JoeJulian Proposed for 3.7
21:56 kovshenin joined #gluster
21:56 PatNarciso slick -- is there a proposed release date for 3.7?
21:56 PatNarciso april
21:57 PatNarciso April 29 GA.  k.  So, I'm no taking vacation in April.
21:57 PatNarciso *not.  I've made too many typos in 2014.
22:03 JoeJulian hehe
22:24 jvandewege_ joined #gluster
22:27 cyberbootje1 joined #gluster
22:28 d-fence_ joined #gluster
22:29 iPancreas joined #gluster
22:32 radez_g0` joined #gluster
22:35 RobertLaptop joined #gluster
22:40 fandi joined #gluster
22:48 johnnytran joined #gluster
22:48 Telsin joined #gluster
22:48 rotbeard joined #gluster
22:48 JordanHackworth joined #gluster
22:49 weykent joined #gluster
22:49 uebera|| joined #gluster
22:49 uebera|| joined #gluster
22:49 strata joined #gluster
22:49 frankS2 joined #gluster
22:49 sadbox joined #gluster
22:49 Andreas-IPO joined #gluster
22:49 sage_ joined #gluster
22:50 suman_d joined #gluster
22:51 glusterbot joined #gluster
22:53 Bosse joined #gluster
23:20 Pupeno joined #gluster
23:21 roost joined #gluster
23:36 iPancreas joined #gluster
