
IRC log for #gluster, 2014-04-17


All times shown according to UTC.

Time Nick Message
00:13 hagarth joined #gluster
00:17 chirino joined #gluster
00:24 yinyin_ joined #gluster
00:27 tdasilva left #gluster
01:02 RicardoSSP joined #gluster
01:02 RicardoSSP joined #gluster
01:07 jag3773 joined #gluster
01:17 msciciel3 joined #gluster
01:18 carnil_ joined #gluster
01:19 efries_ joined #gluster
01:21 mjrosenb1 joined #gluster
01:22 bala joined #gluster
01:23 jmarley joined #gluster
01:23 jmarley joined #gluster
01:27 gdubreui joined #gluster
01:31 lmickh joined #gluster
01:49 vpshastry1 joined #gluster
01:49 vpshastry1 left #gluster
02:22 athe joined #gluster
02:30 jag3773 joined #gluster
02:33 yinyin- joined #gluster
02:36 MeatMuppet joined #gluster
02:40 siel joined #gluster
03:13 nightwalk joined #gluster
04:17 yinyin_ joined #gluster
04:21 Oneiroi joined #gluster
04:39 gdubreui joined #gluster
05:00 benjamin_____ joined #gluster
05:01 yinyin joined #gluster
05:10 ravindran1 joined #gluster
05:19 lalatenduM joined #gluster
05:23 baojg joined #gluster
05:24 baojg joined #gluster
05:34 Humble joined #gluster
05:47 ProT-0-TypE joined #gluster
05:52 RameshN joined #gluster
05:53 baojg_ joined #gluster
05:56 velladecin left #gluster
05:58 baojg joined #gluster
06:04 Ark joined #gluster
06:09 baojg joined #gluster
06:11 baojg joined #gluster
06:14 rahulcs joined #gluster
06:23 kanagaraj joined #gluster
06:29 rgustafs joined #gluster
06:30 baojg joined #gluster
06:36 vimal joined #gluster
06:39 baojg_ joined #gluster
06:41 ravindran1 joined #gluster
06:45 andreask joined #gluster
06:45 harish_ joined #gluster
06:47 ekuric joined #gluster
06:52 dusmant joined #gluster
06:53 ricky-ti1 joined #gluster
06:57 velladecin joined #gluster
06:57 velladecin @ports
06:57 glusterbot velladecin: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
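
As a worked example of that factoid, a firewall on a 3.4 brick server might be opened roughly like this (iptables syntax; the width of the brick-port range depends on how many bricks the server has ever hosted, so the 49251 upper bound here is an arbitrary assumption):

    # glusterd management, plus 24008 for rdma
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    # brick ports, 49152 and up on 3.4 (one per brick; range width is a guess)
    iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT
    # gluster NFS (38465-38467) and NLM (38468)
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT
    # rpcbind/portmap and NFS
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
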
07:04 glusterbot New news from resolvedglusterbugs: [Bug 1086985] Nightly build failure: configure.ac:10: error: possibly undefined macro: m4_esyscmd <https://bugzilla.redhat.com/show_bug.cgi?id=1086985>
07:05 eseyman joined #gluster
07:07 ctria joined #gluster
07:08 ProT-0-TypE joined #gluster
07:09 ajha joined #gluster
07:10 Philambdo joined #gluster
07:23 JoeJulian joined #gluster
07:26 haomaiwang joined #gluster
07:30 haomaiw__ joined #gluster
07:43 harish_ joined #gluster
07:50 rbw joined #gluster
07:55 fsimonce joined #gluster
08:02 rahulcs joined #gluster
08:06 goerk joined #gluster
08:06 goerk exit
08:07 goerk joined #gluster
08:11 Norky joined #gluster
08:14 liquidat joined #gluster
08:17 haomaiwa_ joined #gluster
08:19 Andyy2 joined #gluster
08:26 giannello joined #gluster
08:31 XAT joined #gluster
08:32 XAT Hey guys, did anyone ever try to put maildir on a gluster ?
08:32 XAT My dovecot is extremely slow on it
08:32 saravanakumar1 joined #gluster
08:34 rahulcs joined #gluster
08:40 baojg joined #gluster
08:58 calum_ joined #gluster
09:01 ron-slc joined #gluster
09:05 jmarley joined #gluster
09:05 jmarley joined #gluster
09:43 pvh_sa joined #gluster
09:50 andreask joined #gluster
10:00 svennd joined #gluster
10:01 harish_ joined #gluster
10:02 svennd Will GlusterFS (or any distributed filesystem) slow down file handling (writing/reading) compared to simply mirroring and using only 1 server (with a second server for file redundancy)?
10:08 ctria joined #gluster
10:08 Licenser svennd given enough servers gluster should be faster than a mirror
10:09 svennd but not with a 2-storage-server setup?
10:10 Licenser not sure, I think it depends on the mirroring: if you ensure consistency it will be about the same; if you don't guarantee consistency and are willing to risk data loss, then gluster (or any other system that does guarantee it) will probably be slower
10:11 Licenser but please take that with a grain of salt, this is simply an educated guess, I'm not even actively using Gluster (yet)
10:12 baojg_ joined #gluster
10:12 svennd Thx for your idea, I think you're right ...
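
In gluster terms, the two-server mirror being discussed is a replica 2 volume: every write goes to both bricks synchronously, which is where the overhead relative to a single local disk comes from, while reads can be served from either side. A minimal sketch, with placeholder hostnames and brick paths:

    gluster peer probe server2
    gluster volume create myvol replica 2 server1:/export/brick1 server2:/export/brick1
    gluster volume start myvol
    mount -t glusterfs server1:/myvol /mnt/myvol
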
10:26 harish_ joined #gluster
10:28 xymox joined #gluster
10:34 kanagaraj joined #gluster
10:37 rahulcs joined #gluster
10:45 xymox joined #gluster
10:51 dusmant joined #gluster
10:53 pvh_sa joined #gluster
10:55 Ark joined #gluster
11:09 edward1 joined #gluster
11:10 lalatenduM joined #gluster
11:45 kanagaraj joined #gluster
11:57 rahulcs joined #gluster
12:03 Philambdo joined #gluster
12:09 jmarley joined #gluster
12:09 jmarley joined #gluster
12:10 sm1ly joined #gluster
12:11 sm1ly re2all. I'm following this guide http://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-centos-6.3-automatic-file-replication-mirror-across-two-storage-servers to set up HA, but on gluster 3.4 and CentOS 6.5. At the step where the volumes are created it fails, and there's nothing helpful in the logs. Is there a debug tool or some options I can use?
12:11 glusterbot Title: High-Availability Storage With GlusterFS 3.2.x On CentOS 6.3 - Automatic File Replication (Mirror) Across Two Storage Servers | HowtoForge - Linux Howtos and Tutorials (at www.howtoforge.com)
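
When volume creation fails with a terse error, the glusterd log on each server and the CLI log usually say more, and the CLI log level can be raised (default log paths and the --log-level flag assumed per the 3.4 docs; volume name and brick paths below are placeholders):

    # on each server: the glusterd daemon log
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    # on the machine running the command: the CLI log
    less /var/log/glusterfs/cli.log
    # re-run with a more verbose CLI log level
    gluster --log-level=DEBUG volume create testvol replica 2 server1:/data server2:/data
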
12:18 edward1 joined #gluster
12:22 jag3773 joined #gluster
12:23 vpshastry1 joined #gluster
12:23 vpshastry1 left #gluster
12:25 rahulcs joined #gluster
12:33 benjamin_____ joined #gluster
12:40 sroy_ joined #gluster
12:40 morsik joined #gluster
12:40 morsik hi
12:40 glusterbot morsik: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:41 morsik oh, someone shut this bot up…
12:41 morsik and my question… ;f
12:41 morsik does performance.io-thread-count mean 1 request per thread, or can one thread handle multiple requests?
12:42 morsik i can't find suitable information in the documentation
12:42 morsik (gluster 3.4)
12:43 rahulcs joined #gluster
12:47 Slashman joined #gluster
12:53 chirino joined #gluster
12:53 jmarley joined #gluster
12:53 jmarley joined #gluster
13:02 ctria joined #gluster
13:22 pvh_sa joined #gluster
13:27 baojg joined #gluster
13:32 jmarley joined #gluster
13:32 jmarley joined #gluster
13:34 lalatenduM morsik, I think it is the thread count for the io-threads translator, check https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_managing_volumes.md
13:35 glusterbot Title: glusterfs/doc/admin-guide/en-US/markdown/admin_managing_volumes.md at master · gluster/glusterfs · GitHub (at github.com)
13:38 lmickh joined #gluster
13:42 theron joined #gluster
13:43 morsik lalatenduM: that doesn't help me. i know it's the number of io threads, but what exactly does that mean…
13:43 morsik can an io thread do only one io request, or many at once?
13:44 morsik (probably easier to code one per thread :P)
13:44 lalatenduM morsik, not sure , ndevos might have the answer
13:45 lalatenduM morsik, there would be async and sync requests too; not sure if we can directly map one thread per IO request
13:47 Ark joined #gluster
13:48 * ndevos never looked at those details...
13:52 jmarley joined #gluster
13:52 jmarley joined #gluster
13:52 lalatenduM morsik, I think you might want to send a mail to gluster-devel mailing list
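
For what it's worth, performance.io-thread-count sizes the worker pool of the io-threads translator; each worker appears to pick up one queued file operation at a time, so concurrency comes from the pool as a whole rather than from any single thread, though treat that reading as a best guess, as suggested above. Tuning it is a per-volume setting:

    # raise the pool from the default (16 in this era) to 32
    gluster volume set myvol performance.io-thread-count 32
    gluster volume info myvol
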
13:52 B21956 joined #gluster
13:56 rahulcs joined #gluster
13:56 kaptk2 joined #gluster
13:59 dbruhn joined #gluster
14:02 harold_ joined #gluster
14:03 chirino joined #gluster
14:03 jobewan joined #gluster
14:11 wushudoin joined #gluster
14:16 ira__ joined #gluster
14:23 ctria joined #gluster
14:24 gmcwhistler joined #gluster
14:30 lpabon joined #gluster
14:33 baojg joined #gluster
14:37 eightyeight joined #gluster
14:51 baojg joined #gluster
14:54 glusterbot New news from newglusterbugs: [Bug 1086743] Add documentation for the Feature: RDMA-connection manager (RDMA-CM) <https://bugzilla.redhat.com/show_bug.cgi?id=1086743> || [Bug 1086758] Add documentation for the Feature: Changelog based parallel geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1086758> || [Bug 1086782] Add documentation for the Feature: oVirt 3.2 integration <https://bugzilla.redhat.com/show_bug.
14:54 andreask joined #gluster
15:01 ctria joined #gluster
15:08 daMaestro joined #gluster
15:10 rahulcs joined #gluster
15:12 ricky-ticky1 joined #gluster
15:12 sjoeboo hey guys is there an upgrade guide/notes for 3.5 by chance?
15:12 sjoeboo i see that the gluster.org docs haven't been updated yet.
15:14 jmarley joined #gluster
15:14 jmarley joined #gluster
15:19 benjamin_____ joined #gluster
15:29 rahulcs joined #gluster
15:34 Georgyo_ joined #gluster
15:36 jbrooks joined #gluster
15:44 ndevos sjoeboo: I think/hope links to that will be included in the official announcement - I expect that for later today
15:46 sjoeboo okay, i'll be watching for it!
15:48 plarsen joined #gluster
15:49 ndevos johnmark, hagarth: just a reminder about ^ please
15:50 baojg joined #gluster
16:05 diegows joined #gluster
16:06 jmarley joined #gluster
16:06 jmarley joined #gluster
16:07 hagarth joined #gluster
16:11 rahulcs_ joined #gluster
16:12 Mo__ joined #gluster
16:15 jag3773 joined #gluster
16:17 jbd1 joined #gluster
16:27 jmarley joined #gluster
16:27 jmarley joined #gluster
16:30 ngoswami joined #gluster
16:35 hagarth joined #gluster
16:45 rahulcs joined #gluster
16:47 chirino joined #gluster
17:08 necrogami joined #gluster
17:09 necrogami Any idea where I might track down an init.d script for CentOS? The package doesn't seem to come with one.
17:15 dbruhn what version of cent?
17:15 necrogami 6.4
17:16 necrogami the old glusterfs-server no longer exists on epel
17:16 dbruhn which version of gluster?
17:16 necrogami 3.4.0
17:17 dbruhn http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/EPEL.repo/glusterfs-epel.repo
17:17 dbruhn this one?
17:17 dbruhn I don't have the init scripts for 3.4 I am still on 3.3.x
17:19 necrogami gah thanks
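
For the record, the glusterfs-server package from the repo linked above ships the glusterd init script on CentOS 6, so installing from that repo is usually easier than hand-rolling one. A sketch, assuming the repo file works as named:

    wget -O /etc/yum.repos.d/glusterfs-epel.repo \
        http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/EPEL.repo/glusterfs-epel.repo
    yum install glusterfs-server
    service glusterd start
    chkconfig glusterd on
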
17:23 jmarley joined #gluster
17:23 jmarley joined #gluster
17:36 MeatMuppet joined #gluster
17:42 rahulcs joined #gluster
17:46 ctria joined #gluster
17:47 chirino joined #gluster
17:50 hagarth joined #gluster
18:00 theron joined #gluster
18:01 zaitcev joined #gluster
18:02 Matthaeus joined #gluster
18:12 ricky-ticky joined #gluster
18:12 andreask joined #gluster
18:18 chirino joined #gluster
18:34 dbruhn Interesting thing.. on sequential writes I am seeing slightly better performance from TCP/IP than from RDMA over Infiniband
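
For anyone wanting to reproduce that comparison: a volume created with both transports can be mounted over either one from the same client, so the two test runs differ only in the mount option (transport option spelling per the 3.4 docs; verify against your version):

    gluster volume create testvol transport tcp,rdma server1:/export/brick1
    gluster volume start testvol
    mount -t glusterfs -o transport=rdma server1:/testvol /mnt/rdma
    mount -t glusterfs -o transport=tcp  server1:/testvol /mnt/tcp
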
18:55 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <https://bugzilla.redhat.com/show_bug.cgi?id=969461>
19:01 primechuck joined #gluster
19:19 ricky-ticky joined #gluster
19:21 cdez joined #gluster
19:21 rjoseph joined #gluster
19:25 glusterbot New news from newglusterbugs: [Bug 1089054] gf-error-codes.h is missing from source tarball <https://bugzilla.redhat.com/show_bug.cgi?id=1089054>
19:27 baojg joined #gluster
19:40 deeville joined #gluster
19:42 deeville joined #gluster
19:43 deeville joined #gluster
19:43 DanishMan joined #gluster
19:44 deeville joined #gluster
19:44 deeville joined #gluster
19:48 vipulnayyar joined #gluster
20:00 dreville joined #gluster
20:02 qdk joined #gluster
20:26 _dist joined #gluster
20:42 deeville joined #gluster
21:01 Matthaeus1 joined #gluster
21:21 chirino joined #gluster
21:23 warci joined #gluster
21:24 warci hello again all, i have a really bizarre problem that doesn't produce any logging whatsoever
21:24 warci i try to connect to an nfs-exported volume with a windows nfs client
21:24 warci but the client jumps straight to a different volume
21:25 warci if i map the volume to a drive, all is well
21:25 warci but i can't access it directly from explorer
21:25 warci any ideas, coz i'm all out
21:26 warci i'll post it on the mailing list too i guess :)
21:29 Joe630 stupid question time
21:29 Joe630 generic error - what and why http://www.insideottawavalley.com/video/3831763
21:29 Joe630 oops
21:29 glusterbot Title: Video (at www.insideottawavalley.com)
21:30 Joe630 gluster volume create gv0 replica 2 host:brick
21:33 Joe630 what does that even mean
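
For what it's worth, the command quoted above asks for replica 2 but supplies only one brick; the number of bricks has to be a multiple of the replica count, which is the likely source of the generic error. Something along these lines should succeed (hostnames and brick paths are placeholders):

    gluster volume create gv0 replica 2 host1:/export/brick1 host2:/export/brick1
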
21:34 xathor joined #gluster
21:35 xathor Hello.  I would like to see about mounting a folder inside a glusterfs share.  Ex:  mount -t glusterfs server1:/VOLUME/path/to/folder /path/to/mount
21:35 xathor It seems like currently I am only able to mount the entire volume.  Ex:  mount -t glisters server1:/VOLUME /path/to/mount
21:35 xathor Forgive my autocorrect.
21:35 warci xathor: yeah that's normal
21:36 warci it's because it's a fuse volume
21:36 xathor I'm trying to mount a single folder, and the software does not like symlinks.
21:36 warci what i did was: mount the gluster volume somewhere and point a symbolic link to the subdir
21:36 edward1 joined #gluster
21:37 xathor The software I am trying to run apparently is smart enough to identify the symlink and not follow it.
21:38 warci argh
21:38 warci sorry, then i think you're screwed
21:38 warci it's a fuse limitation so....
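
That said, gluster's built-in NFS server, unlike the fuse client, can export a subdirectory of a volume, which may sidestep both the fuse limitation and the symlink problem. A sketch, assuming the nfs.export-dir options behave as documented for 3.4:

    gluster volume set VOLUME nfs.export-dirs on
    gluster volume set VOLUME nfs.export-dir /path/to/folder
    mount -t nfs -o vers=3,mountproto=tcp server1:/VOLUME/path/to/folder /path/to/mount
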
21:40 xathor Well, the only reason I am trying to mount this single folder is because the cluster seems to have problems with lots of tiny files.
21:40 xathor The software takes a lot of the tiny files and chunks them together into block files... which gluster is happy with.
21:48 xathor Is there a way to get a distributed-replicated volume to better handle lots of tiny files?
21:48 xathor I'm using IB and rdma for the backend.
21:49 xathor 3 servers, two 10 core xeons, 24GB ram, 8x 2TB
21:52 hagarth joined #gluster
21:52 daMaestro joined #gluster
22:00 fidevo joined #gluster
22:11 dbruhn xathor, what kind of storage?
22:11 xathor Hm?
22:11 dbruhn like disk subsystem
22:11 xathor ZFS
22:11 xathor two pools per server
22:11 dbruhn disk counts? type of spindles?
22:12 xathor each server has 8 drives
22:12 dbruhn sorry 7200RPM data, 8 drives
22:12 xathor yeah
22:12 dbruhn Which Infiniband variant are you using?
22:12 xathor Mellanox
22:12 dbruhn speed?
22:12 xathor 20 iirc
22:13 xathor Yeah, Rate: 20
22:13 dbruhn If you use NFS it is known to perform better than the fuse client for stuff like you are dealing with.
22:13 dbruhn I am in the same boat on the small files, I am dealing with everything from 1.6 million to 3.6 million files per TB
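
In practice, trying the NFS route is just a different mount on the client side; gluster's built-in NFS server speaks NFSv3 over TCP and needs rpcbind running on the server (a sketch with placeholder names; mount options per common gluster-NFS usage, so verify for your setup):

    umount /path/to/mount
    mount -t nfs -o vers=3,mountproto=tcp server1:/VOLUME /path/to/mount
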
22:13 xathor lemme try it real quick
22:14 xathor That must be stupid slow.
22:14 dbruhn For me, I build my disks to perform: hardware RAID and fast disks
22:14 xathor This is for a super redundant storage system.
22:14 dbruhn I use a replica 2
22:14 xathor This cluster is just for testing right now.
22:14 xathor Have you made any tweaks to make yours faster?
22:15 dbruhn Gluster is really a product of what you put under it.
22:15 sputnik13 joined #gluster
22:15 xathor I've been tempted to swap out all the drives for SSD's just to test.
22:15 dbruhn Honestly my tweaking has been at the raid level, and system level.
22:15 dbruhn I have an SSD based system
22:15 dbruhn it's fast
22:15 dbruhn I am running QDR infiniband too
22:15 dbruhn that helps
22:15 dbruhn each step up with infiniband has better latency
22:16 xathor What are you getting for GB/s in rdma_bw tests?
22:16 dbruhn be back in 15 min, changing locations
22:16 dbruhn depends on the system, I am not really concerned with throughput on my systems.
22:16 dbruhn brb
22:21 daMaestro joined #gluster
22:25 rjoseph joined #gluster
22:32 Joe630 haha good engrish volume start: gv0: failed: Another transaction is in progress. Please try again after sometime.
22:35 ira joined #gluster
22:53 jag3773 joined #gluster
23:03 dbruhn joined #gluster
23:06 MeatMuppet left #gluster
23:15 dbruhn joined #gluster
23:22 primechuck joined #gluster
23:23 primechu_ joined #gluster
23:23 chirino joined #gluster
23:37 * jbd1 started a fix-layout on his 64T volume back in early March... still waiting for it to finish
23:42 jbd1 gluster volume rebalance status says 140183384366624 files were rebalanced on one node
23:43 jbd1 (though it's a fix layout)
23:43 jbd1 (I don't have 140 trillion files, btw).  Silly glusterfs
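
For anyone following along, the operation being described is driven by these commands, and the file counter jbd1 quotes comes from the status output (the 140-trillion figure presumably being a counter bug in this version rather than real progress):

    gluster volume rebalance VOLNAME fix-layout start
    gluster volume rebalance VOLNAME status
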
23:47 Ark joined #gluster
23:53 chirino joined #gluster
23:59 primechuck joined #gluster
