IRC log for #gluster, 2014-09-11

All times shown according to UTC.

Time Nick Message
00:05 JoeJulian Why are you changing it?
00:06 clyons joined #gluster
00:06 if-kenn I am trying to configure the quick-read translator
00:10 if-kenn JoeJulian: are you not supposed to edit the .vol file directly?  how are you supposed to change the graph of translators?
00:10 JoeJulian if-kenn: To make customizations to the volume configurations, there's a mechanism using the "filter" directory (or is it "filters"...). Not really sure how it works as it's undocumented though.
00:12 dtrainor joined #gluster
00:13 if-kenn JoeJulian: is this a directory that is already supposed to exist?  The command 'find /var/lib/glusterd/ -name "*filter*"' produces no results
00:15 if-kenn huh, this seems a bit kludgy: http://www.gluster.org/community/documentation/index.php/Glusterfs-filter
00:16 JoeJulian yeah...
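The filter mechanism on that wiki page boils down to dropping an executable into a version-specific directory that glusterd runs against every regenerated volfile. A minimal sketch, assuming the behaviour described on that page holds for this version; the directory path and the option being edited are both assumptions, so check them against your installation:

    # assumed location; look for an existing filter dir with: ls -d /usr/lib*/glusterfs/*/filter
    mkdir -p /usr/lib64/glusterfs/3.5.2/filter
    cat > /usr/lib64/glusterfs/3.5.2/filter/tune-quick-read <<'EOF'
    #!/bin/sh
    # glusterd is said to pass the path of the freshly written volfile as $1
    VOLFILE="$1"
    # hypothetical edit: bump quick-read's cache timeout in the generated graph
    sed -i 's/option cache-timeout 1$/option cache-timeout 10/' "$VOLFILE"
    EOF
    chmod +x /usr/lib64/glusterfs/3.5.2/filter/tune-quick-read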
00:34 if-kenn man even with setting all of these options Gluster client is 5x slower than the NFS client:
00:34 if-kenn gluster volume set drupal_assets performance.quick-read on
00:35 if-kenn gluster volume set drupal_assets performance.cache-max-file-size 128KB
00:35 if-kenn gluster volume set drupal_assets performance.cache-refresh-timeout 10
00:35 if-kenn gluster volume set drupal_assets performance.cache-size 1024MB
00:35 if-kenn gluster volume set drupal_assets performance.io-thread-count 16
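Whether those settings actually stuck can be confirmed from the CLI; the reconfigured options appear at the end of the volume info output, and `volume set help` lists what is tunable on the installed release:

    gluster volume info drupal_assets      # shows an "Options Reconfigured" section
    gluster volume set help                # lists available options and their defaults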
00:35 dtrainor joined #gluster
00:35 julim joined #gluster
00:36 if-kenn left #gluster
00:36 if-kenn joined #gluster
00:40 tdasilva joined #gluster
01:18 gmcwhistler joined #gluster
01:23 bala joined #gluster
01:36 vimal joined #gluster
01:36 jmarley_ joined #gluster
01:39 chirino joined #gluster
01:40 gildub joined #gluster
01:42 rjoseph joined #gluster
01:54 haomaiwa_ joined #gluster
01:57 harish joined #gluster
02:04 wgao joined #gluster
02:05 if-kenn joined #gluster
02:08 nishanth joined #gluster
02:09 _Bryan_ joined #gluster
02:13 haomaiwa_ joined #gluster
02:13 gildub joined #gluster
02:36 gmcwhistler joined #gluster
02:59 hagarth joined #gluster
03:02 vu joined #gluster
03:12 if-kenn joined #gluster
03:15 spandit joined #gluster
03:19 vu joined #gluster
03:24 bharata-rao joined #gluster
03:48 kanagaraj joined #gluster
03:49 itisravi joined #gluster
03:50 recidive joined #gluster
03:54 haomaiwang joined #gluster
04:00 haomaiw__ joined #gluster
04:01 haomai___ joined #gluster
04:06 haomaiwa_ joined #gluster
04:08 RameshN joined #gluster
04:09 shubhendu joined #gluster
04:13 bharata-rao joined #gluster
04:14 aronwp joined #gluster
04:16 aronwp anybody out there have any experience with a glusterfs mount suddenly unmounting?
04:16 aronwp seems to happen once in a while
04:23 aronwp joined #gluster
04:28 aronwp seems like i had metadata self heal fail and gluster unmounted
04:28 rjoseph joined #gluster
04:28 aronwp anybody mind taking a look at my error log http://pastie.org/9543827
04:28 glusterbot Title: #9543827 - Pastie (at pastie.org)
04:32 aronwp joined #gluster
04:38 Rafi_kc joined #gluster
04:38 rafi1 joined #gluster
04:41 anoopcs joined #gluster
04:44 ndarshan joined #gluster
04:47 atinmu joined #gluster
04:49 ppai joined #gluster
04:50 jiffin joined #gluster
04:53 jtux joined #gluster
04:55 deepakcs joined #gluster
04:57 hagarth joined #gluster
04:58 nbalachandran joined #gluster
05:00 lyang0 joined #gluster
05:01 meghanam joined #gluster
05:01 meghanam_ joined #gluster
05:03 bharata-rao joined #gluster
05:06 RioS2 joined #gluster
05:07 lyang0 joined #gluster
05:14 huleboer joined #gluster
05:17 RioS2 joined #gluster
05:21 tom[] joined #gluster
05:22 toordog joined #gluster
05:25 shubhendu_ joined #gluster
05:27 huleboer joined #gluster
05:28 fubada joined #gluster
05:30 raghu joined #gluster
05:33 Philambdo joined #gluster
05:33 anoopcs joined #gluster
05:33 LebedevRI joined #gluster
05:49 kdhananjay joined #gluster
05:52 atalur joined #gluster
05:53 nshaikh joined #gluster
05:53 aronwp joined #gluster
05:53 RaSTar joined #gluster
05:56 karnan joined #gluster
05:57 aronwp joined #gluster
06:04 MacWinner joined #gluster
06:06 bala joined #gluster
06:10 navid__ joined #gluster
06:19 soumya joined #gluster
06:21 soumya joined #gluster
06:28 aronwp joined #gluster
06:32 lalatenduM joined #gluster
06:35 RaSTar joined #gluster
06:38 MacWinner joined #gluster
06:41 hagarth joined #gluster
06:42 saurabh joined #gluster
06:53 ekuric joined #gluster
07:08 ramteid joined #gluster
07:10 hagarth joined #gluster
07:10 haomaiwang joined #gluster
07:11 RameshN joined #gluster
07:13 rgustafs joined #gluster
07:14 getup- joined #gluster
07:17 haomai___ joined #gluster
07:21 glusterbot New news from newglusterbugs: [Bug 1130023] [RFE] Make I/O stats for a volume available at client-side <https://bugzilla.redhat.com/show_bug.cgi?id=1130023>
07:23 rjoseph joined #gluster
07:27 fsimonce joined #gluster
07:28 Philambdo joined #gluster
07:31 aronwp joined #gluster
07:31 haomaiwa_ joined #gluster
07:31 hagarth joined #gluster
07:34 kumar joined #gluster
07:41 RaSTar joined #gluster
07:43 MickaTri joined #gluster
08:02 liquidat joined #gluster
08:03 mhoungbo joined #gluster
08:08 rjoseph joined #gluster
08:12 Pupeno joined #gluster
08:17 Gabou http://blog.gluster.org/author/zbyszek/
08:17 Gabou I've followed that to enable SSL.. Do I need to replicate ssl files on both servers ?
08:32 aronwp joined #gluster
08:33 rtalur_ joined #gluster
08:34 richvdh joined #gluster
08:36 rtalur__ joined #gluster
08:38 andreask joined #gluster
08:50 meghanam joined #gluster
08:50 soumya joined #gluster
08:50 meghanam_ joined #gluster
08:51 Gabou Okay I found out :3
08:51 bazzles joined #gluster
08:52 glusterbot New news from newglusterbugs: [Bug 1140549] DHT: Rebalance process crash after add-brick and `rebalance start' operation <https://bugzilla.redhat.com/show_bug.cgi?id=1140549>
08:53 jiffin joined #gluster
08:58 rjoseph joined #gluster
09:06 lyang0 joined #gluster
09:12 hagarth joined #gluster
09:18 shubhendu_ joined #gluster
09:19 ndarshan joined #gluster
09:20 lyang0 joined #gluster
09:22 glusterbot New news from newglusterbugs: [Bug 1140556] Core: client crash while doing rename operations on the mount <https://bugzilla.redhat.com/show_bug.cgi?id=1140556>
09:23 nishanth joined #gluster
09:28 rjoseph joined #gluster
09:28 aravindavk joined #gluster
09:32 MickaTri Hi, any recommendations about how much RAM I need to buy? 1GB per x TB?
09:42 haomaiwa_ joined #gluster
09:46 gmcwhistler joined #gluster
09:51 nshaikh joined #gluster
10:02 haomaiwa_ joined #gluster
10:06 meghanam_ joined #gluster
10:06 meghanam joined #gluster
10:07 jiffin joined #gluster
10:07 soumya joined #gluster
10:08 rtalur__ joined #gluster
10:25 rgustafs joined #gluster
10:33 edward1 joined #gluster
10:40 meghanam_ joined #gluster
10:40 soumya joined #gluster
10:40 meghanam joined #gluster
10:42 kkeithley1 joined #gluster
10:44 wgao joined #gluster
10:50 rjoseph joined #gluster
10:51 rtalur__ joined #gluster
10:52 glusterbot New news from newglusterbugs: [Bug 1138229] Disconnections from glusterfs through libgfapi <https://bugzilla.redhat.com/show_bug.cgi?id=1138229> || [Bug 1117822] Tracker bug for GlusterFS 3.6.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1117822>
10:58 ricky-ti1 joined #gluster
10:59 getup- joined #gluster
11:02 ndarshan joined #gluster
11:02 RameshN joined #gluster
11:02 nishanth joined #gluster
11:03 shubhendu_ joined #gluster
11:18 rtalur__ joined #gluster
11:18 recidive joined #gluster
11:20 andreask joined #gluster
11:22 glusterbot New news from newglusterbugs: [Bug 1122581] Sometimes self heal on disperse volume crashes <https://bugzilla.redhat.com/show_bug.cgi?id=1122581> || [Bug 1140626] Sometimes self heal on disperse volume crashes <https://bugzilla.redhat.com/show_bug.cgi?id=1140626>
11:26 baoboa joined #gluster
11:28 meghanam joined #gluster
11:28 meghanam_ joined #gluster
11:29 soumya joined #gluster
11:34 mojibake joined #gluster
11:34 diegows joined #gluster
11:49 jmarley_ joined #gluster
11:50 elico joined #gluster
11:51 harish joined #gluster
11:52 glusterbot New news from newglusterbugs: [Bug 1140628] Volume option set command input not consistent <https://bugzilla.redhat.com/show_bug.cgi?id=1140628>
11:58 hagarth @channelstats
11:58 glusterbot hagarth: On #gluster there have been 359285 messages, containing 14031396 characters, 2316536 words, 8431 smileys, and 1172 frowns; 1706 of those messages were ACTIONs. There have been 159554 joins, 4260 parts, 155671 quits, 28 kicks, 856 mode changes, and 7 topic changes. There are currently 246 users and the channel has peaked at 250 users.
12:00 getup- joined #gluster
12:03 meghanam joined #gluster
12:03 meghanam_ joined #gluster
12:04 bennyturns joined #gluster
12:04 soumya joined #gluster
12:05 deeville joined #gluster
12:10 jmarley_ joined #gluster
12:13 itisravi_ joined #gluster
12:19 rtalur__ joined #gluster
12:21 aronwp joined #gluster
12:25 rjoseph joined #gluster
12:38 getup- joined #gluster
12:39 julim joined #gluster
12:39 jmarley joined #gluster
12:40 Guest89192 joined #gluster
12:42 plarsen joined #gluster
12:43 aronwp joined #gluster
12:51 theron joined #gluster
12:51 rtalur__ joined #gluster
12:56 sputnik13 joined #gluster
13:01 MickaTri Hi, what is the file system which is recommended for glusterfs? XFS, ext4/3?
13:02 soumya joined #gluster
13:04 skippy XFS
13:13 MickaTri did that change between versions?
13:14 MickaTri http://gluster.org/community/documentation/index.php/Gluster_3.2:_Checking_GlusterFS_Minimum_Requirements
13:15 MickaTri they recommend Ext4
13:17 hagarth joined #gluster
13:20 MickaTri why ?
13:20 bala joined #gluster
13:25 nated joined #gluster
13:25 skippy not sure of the specifics, but I believe XFS offers some useful features to the Gluster internals
13:26 MickaTri ok thx ;)
13:26 Alex There's a pretty bad bug with ext4 too
13:26 Alex I saw it as unicode not being valid in filenames
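As an aside, the usual XFS recommendation for bricks in Gluster docs of this era comes with a larger inode size so extended attributes stay inside the inode; a typical brick-formatting sketch (the device name is hypothetical):

    mkfs.xfs -i size=512 /dev/sdb1           # 512-byte inodes leave room for gluster xattrs
    mkdir -p /data/glusterfs/brick1
    mount /dev/sdb1 /data/glusterfs/brick1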
13:31 LHinson joined #gluster
13:35 LHinson1 joined #gluster
13:36 aronwp joined #gluster
13:39 sputnik13 joined #gluster
13:40 jmarley joined #gluster
13:41 dreville joined #gluster
13:43 aronwp_ joined #gluster
13:43 jobewan joined #gluster
13:45 bala joined #gluster
13:47 mrEriksson Hello! Question, is it possible to get glusterd to listen to multiple addresses? I've got a couple of multi-homed hosts and I've configured glusterd to only bind to the address that is in the vlan dedicated for storage. But this causes problems with rebalancing etc, since gluster tries to connect to localhost, not the actual interface address that glusterd is bound to. Any ideas?
13:51 dockbram joined #gluster
13:53 aravindavk joined #gluster
13:53 side_control joined #gluster
13:54 LebedevRI joined #gluster
13:59 LHinson joined #gluster
13:59 wushudoin| joined #gluster
14:05 plarsen joined #gluster
14:22 tdasilva joined #gluster
14:25 bala joined #gluster
14:27 aravindavk joined #gluster
14:29 if-kenn joined #gluster
14:34 LebedevRI joined #gluster
14:43 itisravi_ joined #gluster
14:53 soumya joined #gluster
14:57 _Bryan_ joined #gluster
14:58 deepakcs joined #gluster
15:00 jbrooks joined #gluster
15:00 if-kenn joined #gluster
15:04 LHinson1 joined #gluster
15:07 if-kenn_ joined #gluster
15:08 aronwp joined #gluster
15:08 if-kenn left #gluster
15:09 mojibake joined #gluster
15:09 if-kenn joined #gluster
15:09 mojibake joined #gluster
15:09 if-kenn_ left #gluster
15:10 if-kenn Is anyone successfully using quick-read? Does anyone know if it is still supported in 3.5.2 or abandoned? I have set the following via "gluster volume set VOL_NAME" to no change in serving small files: performance.quick-read on, performance.cache-max-file-size 128KB, performance.cache-refresh-timeout 10, performance.cache-size 1024MB, performance.io-thread-count 16. Thanks.
15:12 if-kenn If this is not the right venue to ask this question, please point me where I should.
15:17 Gugge joined #gluster
15:18 Slashman joined #gluster
15:27 luis_silva joined #gluster
15:27 luis_silva Hey quick question - georeplication is bidirectional right?
15:31 ndevos luis_silva: sorry, it is not
15:33 luis_silva ok cool, thanks.
15:33 luis_silva That's good to know.
15:37 rotbeard joined #gluster
15:46 semiosis if-kenn: quick-read is on by default
15:46 semiosis at least it is on 3.4.2
15:46 semiosis probably all versions since 3.1
15:48 gmcwhistler joined #gluster
15:54 dtrainor joined #gluster
15:55 dtrainor joined #gluster
15:59 NigeyS joined #gluster
16:00 NigeyS afternoon :)
16:02 NigeyS semiosis you about ?
16:02 semiosis hi NigeyS. whats up?
16:02 NigeyS hey, can i just check with you about mounting the volume on the 2 servers?
16:03 NigeyS fs01 i have it mounting fs01 obviously .. but fs02 .. as its a replica, do i mount from fs01, or fs02? slightly confused ...
16:04 semiosis NigeyS: glusterfs is fully distributed, no master/primary, all replicas are equal.  you can mount from any server in the pool.  see ,,(mount server)
16:04 glusterbot NigeyS: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
16:05 NigeyS aha i see, oki, thats fine, many thanks, again :)
16:05 semiosis yw
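Since the named server is only used to fetch the volume definition, a fallback for that initial fetch can be given at mount time; the option spelling varies between releases (backupvolfile-server vs backup-volfile-servers) and the mount point is illustrative, so treat this as a sketch using the hostnames from the conversation:

    mount -t glusterfs -o backupvolfile-server=fs02 fs01:/websites /sites
    # or the /etc/fstab equivalent:
    # fs01:/websites  /sites  glusterfs  defaults,_netdev,backupvolfile-server=fs02  0 0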
16:05 PeterA joined #gluster
16:21 kumar joined #gluster
16:25 NigeyS semiosis got that locking happening again now, when i do df or try to ls the /sites mount :|
16:25 semiosis that locking?
16:25 semiosis what locking?
16:26 NigeyS df or ls on the mount hangs indefinitely
16:26 if-kenn_ joined #gluster
16:26 if-kenn joined #gluster
16:30 RameshN joined #gluster
16:34 aronwp joined #gluster
16:36 sijis left #gluster
16:36 semiosis NigeyS: is it possible you've got different versions of glusterfs on clients & servers?
16:37 NigeyS both servers running 3.4.2 let me check client versions ...
16:38 NigeyS i had this problem last time i just cannot remember how i fixed it
16:38 semiosis you could pastie.org the client log file, that might help
16:38 NigeyS sure, 2 secs
16:39 chirino joined #gluster
16:42 NigeyS http://pastie.org/9545514
16:42 glusterbot Title: #9545514 - Pastie (at pastie.org)
16:43 semiosis some issues here
16:43 semiosis client lost connection to the servers
16:44 NigeyS yip, says clients are a different version too, shall i update to 3.3 from the ppa ?
16:45 semiosis dont trust the version reported in the log.  check the version by running glusterfs --version
16:46 NigeyS 3.4.2
16:47 NigeyS and trying to update from ppa gives me a file not found error
16:47 NigeyS Failed to fetch http://ppa.launchpad.net/semiosis/ubuntu-glusterfs-3.3/ubuntu/dists/trusty/main/binary-amd64/Packages  404  Not Found
16:47 semiosis you dont want 3.3
16:48 NigeyS ah thats true, silly me
16:54 bennyturns joined #gluster
16:59 theron joined #gluster
17:01 gmcwhistler joined #gluster
17:05 if-kenn joined #gluster
17:05 if-kenn_ joined #gluster
17:05 dtrainor joined #gluster
17:05 sputnik13 joined #gluster
17:06 charta joined #gluster
17:14 NigeyS semiosis same thing with 3.4.5 tried to manually mount and it just hangs the FS
17:14 semiosis NigeyS: did you reboot servers after upgrading?
17:14 semiosis maybe try stop/start the volume
17:15 NigeyS yup, it's as if the websites volume isn't there
17:17 tom[] joined #gluster
17:23 R0ok__ joined #gluster
17:25 if-kenn_ semiosis: if quick-read is turned on by default, then these are some serious performance issues, as Gluster server/Gluster client is 4.6 times slower than NFS server/NFS client, 14.5 times slower than Gluster server/NFS client.  i thought i would see how performance changes turning off performance.quick-read, performance.io-cache, performance.read-ahead.  there was no change in performance and i restarted volumes and glusterd and reran
17:25 if-kenn_ to make sure.
17:29 semiosis if-kenn_: neat
17:30 if-kenn_ neat?
17:30 semiosis i mean, that's interesting.
17:31 if-kenn_ ah
17:31 davemc joined #gluster
17:31 if-kenn_ it looks like quick-read makes no difference, is there something blocking it from activating?
17:31 davemc good day, all
17:31 NigeyS semiosis when i create the volume should i make a directory for it like /data/glusterfs/websites ? ..
17:32 semiosis if-kenn_: what are you trying to accomplish?
17:32 kryl joined #gluster
17:32 semiosis hi
17:32 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:32 semiosis davemc: ^^^
17:32 semiosis NigeyS: yes, or gluster will make it for you
17:33 hchiramm_ joined #gluster
17:34 kryl if I stop glusterfs-server the bricks stay active!
17:34 kryl how to stop everything ?
17:35 kryl do I need to kill the pids ?
17:35 Gabou hi
17:35 glusterbot Gabou: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:35 Gabou lol
17:35 semiosis charta: see ,,(php)
17:35 glusterbot charta: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
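The mount options in that second factoid map onto the FUSE mount command roughly as follows; the timeout values, hostname and paths here are placeholders for illustration, not tuned recommendations:

    mount -t glusterfs \
      -o attribute-timeout=30,entry-timeout=30,negative-timeout=30,fopen-keep-cache \
      server1:/webvol /var/www/shared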
17:35 kryl hi
17:35 glusterbot kryl: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:36 semiosis charta: first of all, you want to eliminate include/require calls if possible, using autoloading instead.
17:36 JoeJulian kryl: pkill -f gluster
17:36 semiosis charta: if you can't do that, then you can use APC to cache php code, set APC to disable stat calls.  this should improve performance a lot, but you will have to reload apache when your php code changes
17:36 charta yes, but I have 12 programmers on my head craving about symphony and so far it's 350 php files, will be 4 times more or so
17:36 kryl isn't it too rude ?
17:37 semiosis charta: finally, you should optimize your include path so that framework paths come first, because looking up files in dirs where they dont exist (negative lookups) is expensive on glusterfs
17:37 if-kenn_ semiosis: i am making a proof of concept for trying to serve assets for a major US city’s website.  the website is just static files now but will be drupal with the code base kept local; the assets (images, documents, aggregated css/js) are on gluster as they need to be shared horizontally.  i lessened the stack down to minimal for testing: web server (apache, mounting gluster), 2 glusterd with 2 brick replication, load tester pounding with
17:37 if-kenn_ siege.  there is 0 PHP involved in this test as i made everything static files to perform these tests.
17:37 JoeJulian kryl: It's that way because there are times when you want to restart the management daemon without restarting all your bricks, self-heal daemons and nfs server.
17:38 if-kenn left #gluster
17:38 semiosis if-kenn_: can you please ,,(pasteinfo)
17:38 glusterbot if-kenn_: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
17:38 JoeJulian if-kenn_: Try to ensure that your content is treed and not all in one huge directory.
17:38 NigeyS semiosis no joy, new volume ran this .. ubuntu@fs01:~$ sudo mount -t glusterfs fs01:/websites /mnt/sites  .. filesystem locks :|
17:39 semiosis i suspect if-kenn_ is replicating over a high latency link, just a wild guess
17:39 kryl JoeJulian, ok it's done thank you
17:39 semiosis NigeyS: ,,(pasteinfo)
17:39 glusterbot NigeyS: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
17:40 NigeyS https://dpaste.de/UVTN
17:40 glusterbot Title: dpaste.de: Snippet #282611 (at dpaste.de)
17:40 if-kenn_ semiosis: in production it will be replicated over a low latency link, current test is in an AWS VPN from two zones in us-east-1.
17:40 semiosis NigeyS: you are mounting the client over a brick path.  you can't do that.
17:41 semiosis if-kenn_: inter-AZ within a single region in AWS is low enough latency
17:41 NigeyS umm..ok..
17:41 charta semiosis: thanks for tips. This is what I did to improve performace - mounted volumes as NFS, used fsc option together with cachefilesd. That made dramatic improvement from 4secs load time, to about 0.02-0.2 secs
17:41 R0ok__ joined #gluster
17:42 semiosis charta: cool
17:42 aronwp joined #gluster
17:43 semiosis if-kenn_: you should put a front end cache in front for static assets.  we like varnish around here.
17:43 semiosis if-kenn_: for the drupal code, see my comments to charta above re: optimizing php on glusterfs
17:43 if-kenn_ semiosis: http://ur1.ca/i65p2
17:43 glusterbot Title: #132874 Fedora Project Pastebin (at ur1.ca)
17:43 charta semiosis: what about cache buffers on client nodes, any way to optimize that?
17:43 semiosis if-kenn_: my advice would be to not bother with glusterfs options
17:43 NigeyS so if i make /sites where i want to actually mount the data i do mount -t glusterfs fs01:/websites /sites ?
17:43 semiosis yes
17:44 NigeyS oki, thanks, didnt realise i was mounting the dam bricks :/
17:44 semiosis yw
17:44 if-kenn_ semiosis: we will be implementing varnish; there is not any PHP served off of a gluster mount, all servers have a local code base
17:44 semiosis oh thats great
17:44 NigeyS this would be why i was told better to put the bricks in /data/glusterfs/bla  i take it  ?
17:45 semiosis NigeyS: that sounds reasonable to me
17:45 NigeyS righty, again, thanks
17:45 semiosis yw
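To make the layout concrete: the brick directories are what you name at volume-create time, and the client mount point is a separate, otherwise empty path. A sketch using the names from this exchange (the brick1 subdirectory is made up):

    # on the servers: bricks live under /data/glusterfs, not under the future mount point
    gluster volume create websites replica 2 \
        fs01:/data/glusterfs/websites/brick1 fs02:/data/glusterfs/websites/brick1
    gluster volume start websites
    # on the client: mount the volume, never a brick path
    mkdir -p /sites
    mount -t glusterfs fs01:/websites /sites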
17:45 if-kenn_ semiosis: i need to present the client with options and benchmarks as to why we picked Gluster vs CEPH vs etc.
17:46 semiosis if-kenn_: also, what instance types are you using?  what distro?  what version of glusterfs?  these can all have an impact on performance
17:46 semiosis if-kenn_: and what kind of storage?  ebs I hope!
17:48 if-kenn_ JoeJulian: content is only 71 files spread across 44 directories
17:48 if-kenn_ semiosis: all instance types are m3.xlarge, centos 6.5, gluster 3.5.2 and EBS
17:49 lalatenduM joined #gluster
17:50 semiosis if-kenn_: ok, so you should be able to max out the ebs throughput with those instances
17:50 kryl joined #gluster
17:50 semiosis with multiple threads
17:51 semiosis unless you're using the new ssd ebs, i havent tried that
17:51 if-kenn_ i am using magnetic
17:51 semiosis ok
17:51 semiosis so, is there a problem?
17:52 if-kenn_ this is the siege command i am using, the urls.txt has 63 url requests in it: siege --concurrent=63 --file=urls.txt --benchmark --log=/root/siege-gluster-gluster-tuned-fs.log --reps=500
17:52 if-kenn_ the problem is the performance.
17:52 semiosis ah right
17:52 semiosis well, try ssd ebs
17:53 semiosis most common performance problem is with small files, because every time the file is accessed gluster checks replication metadata.  this involves several round trips from the client to the servers (and disks) which dominates the time to open the file for small files
17:53 semiosis once the file is open the data can move really fast, which makes this less of a problem for large files
17:53 semiosis but if you're going to put varnish in front, then why is this a concern?
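One way to measure that per-open overhead instead of guessing is the built-in profiler, which breaks latency down per file operation (a pile of LOOKUP calls is the usual signature of small-file pain); the volume name is the one from the log:

    gluster volume profile drupal_assets start
    # run the siege test, then:
    gluster volume profile drupal_assets info
    gluster volume profile drupal_assets stop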
17:53 aronwp joined #gluster
17:55 portante joined #gluster
17:55 mojibake joined #gluster
17:55 if-kenn_ because i am trying to tune all aspects individually so that they are the best collectively.  also as i mentioned before to be able to provide benchmarking information to the client .
17:56 if-kenn_ on top of that there is the issue of authenticated vs anonymous users that complicates varnish at times
17:57 semiosis an alternative would be to use nfs clients instead of fuse.  they have built in caching which may help for small files.  the downside is there's no HA for nfs clients
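For reference, the built-in Gluster NFS server of this era speaks NFSv3 over TCP, so an NFS-client mount would look roughly like the line below (hostname and mount point are hypothetical), with the no-HA caveat above still applying:

    mount -t nfs -o vers=3,proto=tcp,mountproto=tcp server1:/drupal_assets /var/www/assets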
17:57 if-kenn_ semiosis: is it not odd that there is no performance difference between having quick-read on or off?
17:58 if-kenn_ semiosis: that is exactly the dilemma
17:59 if-kenn_ i could add lsyncd to the mix to sync js and css but i was hoping for a simpler approach.
18:00 semiosis you could also enable mod_cache in apache
18:00 if-kenn_ (lsyncd and local file)
18:01 NigeyS semiosis what does gluster consider a small file ?
18:01 semiosis NigeyS: gluster doesnt care
18:02 NigeyS but if it performs badly on small files, there must be some kind of threshold where it's considered a large file ?
18:02 if-kenn_ NigeyS: JoeJulian has a good write up here: http://joejulian.name/blog/nfs-mount-for-glusterfs-gives-better-read-performance-for-small-files/
18:02 glusterbot Title: NFS mount for GlusterFS gives better read performance for small files? (at joejulian.name)
18:02 semiosis NigeyS: depends on your deployment
18:02 NigeyS i see
18:02 semiosis NigeyS: infiniband + ssd will push that limit very very low
18:02 NigeyS if-kenn_ thanks i'll take a read
18:04 semiosis if-kenn_: tbqh, if your needs are simple enough to be satisfied by lsyncd then by all means go for it
18:04 semiosis if-kenn_: distributed filesystems are hard.  dont use one unless you have to.
18:05 semiosis if-kenn_: hey maybe you can put the static files in S3 & cloudfront!
18:05 * semiosis goes to lunch
18:06 if-kenn_ semiosis: understood. we need the HA and replication for assets like documents, i think CSS/JS will be the edge case that needs the lsyncd, CDN will be something in the next phase of the project as it requires much more coordination with dev team.
18:07 kryl joined #gluster
18:07 JoeJulian I still hate nfs mounts for "performance" reasons.
18:07 semiosis +1
18:08 JoeJulian If you're relying on your storage for read performance for a customer-facing anything, you're doing it wrong.
18:10 if-kenn_ JoeJulian: this is all simple benchmarking at this time to understand what the best implementation will be.
18:12 aronwp joined #gluster
18:17 MacWinner joined #gluster
18:20 plarsen joined #gluster
18:36 tom[] joined #gluster
18:42 nshaikh joined #gluster
18:46 aronwp joined #gluster
18:48 primeministerp joined #gluster
18:54 glusterbot New news from newglusterbugs: [Bug 1140818] symlink changes to directory, that reappears on removal <https://bugzilla.redhat.com/show_bug.cgi?id=1140818>
18:54 toordog-work glusterbot afr namespace
18:54 toordog-work what is AFR namespace^
18:58 aronwp joined #gluster
19:02 charta semiosis: ubuntu trusty 14.04LTS: /etc/init.d/glusterfs-server stop doesn't do anything, and neither does service glusterfs-server stop
19:03 semiosis that's unusual
19:19 MacWinner joined #gluster
19:20 ira joined #gluster
19:23 vu_ joined #gluster
19:35 theron_ joined #gluster
19:36 clutchk joined #gluster
19:37 foster_ joined #gluster
19:37 if-kenn joined #gluster
19:39 LebedevRI_ joined #gluster
19:39 rturk joined #gluster
19:40 jobewan joined #gluster
19:50 zerick joined #gluster
19:59 bazzz joined #gluster
20:00 bazzz g'day... is it stable/reliable enough to use NFS/Gluster for VMWare as datastore backend? I'm just reading into stuff, have not yet tested anything
20:02 toordog-work NFS maybe not
20:02 toordog-work use client
20:02 bazzz Use Client?
20:03 toordog-work glusterfs client
20:03 diegows joined #gluster
20:03 bazzz glusterfs client on the vmware nodes?
20:03 toordog-work NFS should be good for reading many small files *as it caches locally* but for writes, it is better to use the client *which doesn't do caching*
20:03 toordog-work everywhere you want to mount the glusterfs volume
20:04 toordog-work be careful as glusterfs is not a cluster filesystem, it is a distributed file system; the difference is in the mechanism for dealing with concurrent access
20:05 bazzz VMware datastores would be accessed by one client at a time, not more. It just needs to be replicated in case of failure in the storage backend
20:05 toordog-work ok so glusterfs should do fine
20:06 bazzz I don't know if one could install a glusterfs client on the vmware node, supported is iSCSI, NFS or Fibrechannel
20:06 toordog-work i never tried what you are doing, but i think it should work.  Just keep in mind that your datastore won't be a vmfs filesystem.
20:07 toordog-work your glusterfs can act as an iscsi target
20:07 toordog-work your glusterfs server i meant
20:07 gmcwhistler joined #gluster
20:07 toordog-work but at this point i don't know how you will interact with your glusterfs volume ... as it is not a block device file system.
20:08 toordog-work so iscsi you have to create a filesystem and access block level, ...
20:08 bazzz Regarding the huge file sizes one usually deals with within VMWare I read that it won't make a big difference whether you use iSCSI or NFS
20:08 toordog-work the write will make a difference
20:08 toordog-work in read there is no difference
20:09 toordog-work but you might not have a choice in the end if iscsi doesn't work with glusterfs
20:09 bazzz They used NAS servers which can export logical volumes via iSCSI and NFS and the difference with read and write was below 10%
20:09 VeggieMeat joined #gluster
20:09 toordog-work ok
20:09 toordog-work in the end you only need replication
20:10 toordog-work my worry was about the local caching of NFS and it might not have the latest version of data ..
20:10 toordog-work but i think it doesn't apply in your context
20:10 bazzz I basically have two options (after evaluating half a dozen): take some closed box hardware (e.g. bigger synology boxes) use them as active-passive ("HA replication") or some distributed filesystem with NFS on top (e.g. GlusterFS)
20:10 bazzz That's at least my current understanding
20:10 toordog-work if you have time and resources, you should consider documenting what you do for other people, i didn't see a whitepaper about this setup
20:11 bazzz sure.
20:12 bazzz I would put RAID-6 with SSDs below Gluster for test setup. Would that make sense?
20:12 bazzz two servers.
20:13 bazzz Connected via 10GBe
20:13 toordog-work not sure I understand
20:13 bazzz Is it somehow redundant to put Gluster with replication on top of Raid-6?
20:13 toordog-work yes
20:13 haomaiwa_ joined #gluster
20:13 toordog-work but it is a matter of personal feeling as well
20:13 bazzz Redundant and unnecessary or redundant and super safe?
20:13 bazzz or
20:14 toordog-work here there is a feeling of trust to have raid below gluster as we didn't have much experience with it
20:14 bazzz same here
20:14 toordog-work but overall you are losing performance for this extra redundancy that isn't required since glusterfs is designed to be robust
20:15 bazzz Local SSDs would hit SATA-600 limits anyways I guess
20:15 bazzz So there shouldn't be too much overhead with local RAID-6 (I hope)
20:16 toordog-work i guess so
20:16 bazzz I have 5 servers at hand (DL380 G8 with dual 10GBe) two Netgear XS708e, three servers for VMWare, two e.g. for Gluster (on 14.04 LTS here)
20:16 Lee- use multipath iscsi to 2 distinct targets on different networks. raid1 in hypervisor, pass raid1'd block device to guest vm ;)
20:17 toordog-work wanna share your toys? :) :) :) :)
20:17 bazzz my toys? never :D
20:17 semiosis i think people have used nfs for vmware on gluster
20:18 bazzz The VMs are mostly Windows Servers with Bitlocker encrypted data...
20:18 bazzz (not my choice)
20:18 semiosis bummer
20:18 toordog-work ok
20:18 Lee- i haven't read about how gluster handles partial file changes, which is what would be an important factor for hosting a vm image on gluster. I'm new to gluster in general though
20:19 bazzz Lee-: shouldn't that be flawless? I mean it's no rsync after all...
20:19 toordog-work bazzz you will also have a challenge over performance playing with stripe but semiosis would be better able to explain if it applies to your case.
20:19 toordog-work bazzz you could be surprised about how glusterfs works :P
20:19 bazzz why is that?
20:19 toordog-work bazzz why do you want to use only 2 servers for storage?
20:19 bazzz i don't have more :D
20:19 Lee- bazzz, I'm not suggesting there would be corruption, but in terms of performance.
20:20 toordog-work bazzz if i'm not wrong, i think rsync is involved at some point.
20:20 semiosis nope
20:20 toordog-work also distributed in glusterfs is based per file, not per block
20:20 semiosis rsync is used internally for geo-replication, that's it
20:20 toordog-work ahh ok i knew rsync was involved somewhere :)
20:20 bazzz Here's someone who's been using Gluster for VMWare: http://myitnotes.info/doku.php?id=en:jobs:linux_gluster_nfs_for_vmware
20:20 glusterbot Title: en:jobs:linux_gluster_nfs_for_vmware [IT Notes about: Juniper, Cisco, Checkpoint, FreeBSD, Linux, Windows, VmWare....] (at myitnotes.info)
20:20 toordog-work semiosis is stripe based per file also or is it per block?
20:21 bazzz And that article seems like two years old
20:21 semiosis stripe is probably not what you want
20:21 bazzz what do i want then? :D
20:21 toordog-work even for big file like vmware vhd?
20:21 semiosis probably just replicated, possibly distributed-replicated
20:22 semiosis besides, if you only have two servers, you're precluded from using stripe :)
20:22 bazzz Integrity is top priority, then avialability, then performance
20:22 toordog-work bazzz related to your storage servers, why do you want to use only 2 of them for storage?
20:22 bazzz we have so many VMs that we need 3 servers here (192GB RAM each)
20:22 LHinson joined #gluster
20:22 toordog-work glusterfs is made to use every small piece of free storage available
20:23 bazzz we need to compensate one server failure in frontend and one in storage
20:23 toordog-work you could use 5 server for running the vms
20:23 toordog-work and use all hdd of all server
20:23 toordog-work and host the server storage on 2 of the servers *NFS server*
20:23 toordog-work all servers would be part of the pool of servers and host bricks
20:24 toordog-work while 2 of the servers would be the gluster servers and nfs servers
20:24 toordog-work those 2 storage servers would just have some overhead to manage the glusterfs
20:24 glusterbot New news from newglusterbugs: [Bug 1140844] Read/write speed on a dispersed volume is poor <https://bugzilla.redhat.com/show_bug.cgi?id=1140844> || [Bug 1140845] Current implementation depends on Intel's SSE2 extensions <https://bugzilla.redhat.com/show_bug.cgi?id=1140845> || [Bug 1140846] Random crashes while writing to a dispersed volume <https://bugzilla.redhat.com/show_bug.cgi?id=1140846> || [Bug 1140847] ec tests fail on NetBSD <https://b
20:24 bazzz toordog-work: running the glusterfs as a virtual appliance?
20:25 toordog-work or if the overhead of managing the glusterfs and nfs is not much of a problem, run it on all servers multi path for redundancy
20:25 theron joined #gluster
20:25 toordog-work that could do
20:26 toordog-work a bit of overhead but you could easily do a direct access to the hardware for that virtual appliance and save the overhead.
20:26 toordog-work your server sure support it
20:26 bazzz Hmmm...
20:26 toordog-work even if you wrote to a vmfs for your gluster, the overhead would not be that high if performance is not the main focus
20:26 theron joined #gluster
20:26 bazzz So I would be running basically the storage for my VMs in VMs themselves?
20:27 toordog-work kind of similar to vmware virtual storage
20:27 bazzz which is f***ckin expensive btw.
20:27 andreask joined #gluster
20:27 toordog-work do you run a vmware cluster? *vmotion, drs ...
20:27 bazzz yes, that is planned
20:27 bazzz not yet
20:27 toordog-work arrf ok
20:27 bazzz but planned
20:27 toordog-work that could be complicated then
20:28 toordog-work you would have a chicken-and-egg issue in that scenario
20:28 toordog-work how do you vmotion a vm that is your storage entry point ;)
20:28 bazzz plan is to use 2/3 of resources of each server so that we can move any vm/restart any vm on the remaining hardware
20:28 toordog-work ok
20:28 toordog-work well anyway in that case glusterfs would not be a fit
20:29 toordog-work you need a cluster filesystem
20:29 toordog-work concurrent access
20:29 toordog-work vmfs is a cluster filesystem with drs and vmotion
20:29 bazzz can I vmotion into a different datastore?
20:29 toordog-work nope
20:29 bazzz hm
20:30 toordog-work they must share the same datastore
20:30 toordog-work but you can do migration or something even live
20:30 toordog-work been a while i checked that
20:30 toordog-work not even sure how vmware 5.5 does now
20:30 bazzz but I could compensate a hardware failure in the VMware physical host with GlusterFS backend?
20:31 bazzz then no conucurrent access
20:31 bazzz right?
20:31 toordog-work need to think it a little
20:32 bazzz Not having VMotion live _might_ be okay (would need to be discussed with the team)
20:32 toordog-work to work with vmotion, the servers must share the lock system of the filesystem
20:33 toordog-work to avoid both trying to change files at the same time.
20:34 bazzz We're talking live-migration from one physical VMware host to another one, right?
20:34 ThatGraemeGuy joined #gluster
20:35 chirino joined #gluster
20:35 toordog-work not using vmfs you would lose these features
20:35 toordog-work http://www.vmware.com/products/vsphere/features/vmfs
20:35 glusterbot Title: VMware Virtual Machine File System (VMFS), Shared Storage | United States (at www.vmware.com)
20:36 aronwp joined #gluster
20:36 bazzz toordog-work: I would generally lose these features with NFS, I guess.
20:37 dtrainor joined #gluster
20:37 toordog-work dunno, maybe not
20:37 toordog-work never tried
20:38 toordog-work if you don't need vmotion and you can move manually a vm to another server ... glusterfs could do fine
20:38 toordog-work otherwise you might be looking into zfs with vmfs on top
20:38 toordog-work i think zfs has a way to do replication to a second storage server
20:39 toordog-work and you could use iscsi
20:39 toordog-work semiosis are there people using kvm or xen with glusterfs who are able to migrate vms from one host to another live?
20:39 bazzz Oook. Need to sort out the options :D
20:40 bazzz Good discussion on VMFS (Fibrechannel/VFS) here: http://www.reddit.com/r/vmware/comments/2avtyb/vmfs_via_8_gb_fibrechannel_or_nfs_via_10gb/
20:40 glusterbot Title: VMFS via 8 Gb FibreChannel or NFS via 10Gb Ethernet? : vmware (at www.reddit.com)
20:40 semiosis toordog-work: idk. i use clouds, not build them
20:40 bazzz :D
20:40 toordog-work bazzz i would rather go FC vs iSCSI than FC vs NFS
20:41 toordog-work hehehe i see semiosis, I think that emphasizes how glusterfs is more for cloud env *apps level than infra level*
20:41 bazzz those are the two options he had in that thread...
20:41 toordog-work NFS is not reliable
20:41 semiosis toordog-work: hardly.  i'm just one person, not representative of much
20:42 toordog-work semiosis my feeling about glusterfs was that then.  And you confirmed it with your own usage of it.
20:43 bazzz NetApp seems to recommend NFS for general VMWare use cases. Not into NetApp though...
20:44 bazzz But you def. lose VMotion then
20:47 JoeJulian I build clouds, but I use all-linux solutions.
20:48 JoeJulian I find that things like vmware or proxmox make way too much work out of trying to work around their limitations.
20:49 JoeJulian Or the xen one that I can't think of off the top of my head.
20:50 JoeJulian kvm directly supports libgfapi or librbd eliminating a lot of overhead and is, imho, the most efficient in their use of clustered storage.
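The libgfapi path JoeJulian refers to is exposed through QEMU's gluster:// block driver (a QEMU built with gluster support is assumed; host, volume and image names are made up):

    # create an image directly on a gluster volume, no FUSE mount in the data path
    qemu-img create -f qcow2 gluster://storage1/vmvol/guest1.qcow2 20G
    # boot a guest straight from it
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=gluster://storage1/vmvol/guest1.qcow2,if=virtio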
20:50 clyons joined #gluster
20:51 semiosis can it do live migration?
20:51 JoeJulian but of course.
20:51 semiosis \o/
20:51 JoeJulian We use that ability for evacuating compute nodes for upgrades.
20:53 toordog-work JoeJulian do you use something on top of kvm to manage all the server?
20:53 toordog-work bazzz netapps suggest nfs because they are a NAS not a SAN
20:54 JoeJulian openstack
20:55 toordog-work interesting article about storage in the cloud era: http://www.wwpi.com/index.php?option=com_content&id=8644:the-future-of-cloud-storage-is-nas&Itemid=2701018
20:55 glusterbot Title: The Future of Cloud Storage is NAS (at www.wwpi.com)
20:56 JoeJulian A.B. founded Gluster btw...
20:56 toordog-work i need to put my hand in openstack
20:56 toordog-work yep :)
20:56 toordog-work i noticed in his signature at the bottom
20:56 JoeJulian Nice guy.
20:56 toordog-work at the same time, glusterfs cannot really be used in a cluster environment.
20:57 toordog-work or i missed some feature that would support it
20:57 JoeJulian It's clustered storage by definition.
20:57 toordog-work JoeJulian you are participating in the development of gluster as well?
20:57 toordog-work distributed storage
20:57 toordog-work a cluster requires concurrent access management
20:57 JoeJulian I'm not a developer, I've just been using it for a long time.
20:57 toordog-work ok
20:58 JoeJulian Yes. That's what posix locking is for.
20:58 JoeJulian That's the standard for concurrent access management.
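A quick way to see that locks are honoured across clients is flock(1) against the same file from two different FUSE mounts; the mount path is borrowed from earlier in the log and purely illustrative:

    touch /sites/locktest
    # client A: hold an exclusive lock for 30 seconds
    flock /sites/locktest -c 'sleep 30'
    # client B, meanwhile: a non-blocking attempt fails while A holds the lock
    flock -n /sites/locktest -c 'echo got lock' || echo 'lock held elsewhere'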
20:59 toordog-work I will have to read more about it then.  All i read so far was pointing to that weakness in a cluster environment.  no global locking system.
21:00 JoeJulian I don't know what you're reading, but either you're misinterpreting something, or they're clueless.
21:00 theron joined #gluster
21:00 toordog-work could be in a particular context like via a NFS client instead of gluster client
21:01 JoeJulian even via nfs
21:01 toordog-work anyway, i've been studying glusterfs only for a week or 2
21:01 JoeJulian Cool.
21:02 toordog-work so in that case, bazzz wouldn't have a problem to run vms directly on it with vmotion
21:02 toordog-work *if vmware can support it with another filesystem than vmfs
21:02 JoeJulian Should not, assuming vmware does things correctly.
21:03 toordog-work I really wonder how glusterfs achieves global lock management without clvmd, corosync or redhat cluster
21:03 JoeJulian Since VMware is trying desperately to remain relevant, there's probably a pretty good chance that it would work.
21:03 toordog-work if they want to survive in the cloud infrastructure for enterprise, they will have to support these kinds of filesystems anyway
21:04 bazzz If I had two backend storage servers for GlusterFS, could I configure them in an active-passive setup? Locking would then only happen on one machine I guess.
21:04 JoeJulian I'm not entirely sure, but you'd probably have to read the code for the locks translator to figure that out.
21:04 toordog-work or they will try to build their own in house and they might even be able to find client to buy it
21:04 toordog-work I'm adding it to my list of test to do
21:05 JoeJulian bazzz: no, you access your storage THROUGH gluster. That's why you have consistency.
21:05 toordog-work 5 more hours to generate 5 million files on my volume and i can start my test
21:05 JoeJulian toordog-work: You creating files from a single client?
21:06 bazzz JoeJulian: via gluster-client that would mean I had IP connections to each gluster-node?
21:06 JoeJulian bazzz: via the fuse client, you would. via nfs you just have the one single nfs ip.
21:07 bazzz JoeJulian: that's what I meant. And if my IP with the NFS mount fails then that IP is made available on another gluster node?
21:07 JoeJulian You have to handle the floating ip outside of gluster.
21:08 JoeJulian And that's where you're going to have issues.
21:08 bazzz ah, ok. that's where heartbeat or something like that comes into play?
21:08 JoeJulian s/going to/likely to/
21:08 glusterbot What JoeJulian meant to say was: And that's where you're likely to have issues.
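Handling that floating IP outside Gluster usually means keepalived or a cluster resource manager; a minimal Pacemaker sketch (address, resource and node names are hypothetical, and it deliberately ignores fencing and NFS grace handling, which is where the likely issues live):

    pcs resource create nfs_vip ocf:heartbeat:IPaddr2 \
        ip=192.0.2.50 cidr_netmask=24 op monitor interval=10s
    pcs constraint location nfs_vip prefers gluster-node1=100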
21:09 bazzz JoeJulian: would that be worth testing in your opinion?
21:09 * JoeJulian hates single points of failure.
21:09 JoeJulian absolutely.
21:09 bazzz Or will I *likely* have issues?
21:09 * bazzz hates 'em too
21:10 toordog-work In a DR situation there are VMFS locks and resignatures that have to take place but for now I was just interested in performance
21:10 toordog-work JoeJulian yes i generate file from a python script with urandom
21:11 JoeJulian toordog-work: My thought was that if you could split up your file creation among multiple clients, it would go much faster.
21:12 toordog-work i could definitely try, but at the same time that would leave me less time to chat here
21:12 toordog-work ;)
21:12 JoeJulian hehe
21:12 JoeJulian I wasn't sure if you were hourly or salary. ;)
21:13 toordog-work hahaha, i'm actually salary but hey where's the rush, i'm learning a lot chatting here too
21:13 JoeJulian I have too.
21:14 toordog-work JoeJulian with the gluster client, if one node fails and it was your mount gluster host for your volume, will it crash on the client?
21:14 toordog-work or the client will stay alive with the other node of the volume?
21:15 bazzz for me this is all new stuff... i've been doing software (java ee) clusters for years, never cared about the stuff below it though
21:16 JoeJulian ~mount server | toordog-work
21:16 glusterbot toordog-work: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
21:17 toordog-work nice, that answers one of the things i was worried about
21:21 JoeJulian Oh, andreask! You're the one that left. I heard about that but didn't realize it was you. Are you able to say what your new gig is?
21:21 aronwp joined #gluster
21:22 toordog-work JoeJulian where are you working?
21:24 JoeJulian I'm a Principal Cloud Architect with IO.
21:24 JoeJulian http://www.io.com/solutions/enterprise-cloud/
21:24 glusterbot Title: IO Cloud Solutions – Enterprise Cloud (at www.io.com)
21:25 glusterbot New news from newglusterbugs: [Bug 1140861] A new xattr is needed to store ec parameters <https://bugzilla.redhat.com/show_bug.cgi?id=1140861> || [Bug 1140862] A new xattr is needed to store ec parameters <https://bugzilla.redhat.com/show_bug.cgi?id=1140862>
21:25 toordog-work are you looking for sysadmin ?
21:25 JoeJulian No hiring 'till the end of the year.
21:26 toordog-work awww i might be taken at that time
21:26 JoeJulian I should hope so.
21:26 JoeJulian Would suck to go through the end of the year without having been.
21:26 toordog-work the company look great from the website :)
21:27 toordog-work I'm not unemployed
21:27 toordog-work but looking for new challenge outside my area
21:27 JoeJulian It's a lot of fun, lot of challenge, and a great crew of people.
21:27 toordog-work a bit tired of staying at the same place
21:27 JoeJulian And I get to work from home full time.
21:30 toordog-work that's fun :)
21:34 bazzz toordog-work: if I understand this (http://www.vmware.com/files/pdf/techpaper/VMware-NFS-Best-Practices-WP-EN-New.pdf) whitepaper correctly, then VMotion (live migration) is supported on NFS
21:35 bazzz working from home is a two-sided-sword imho
21:35 toordog-work bazzz depend, in my case my wife is Indonesian and I'm canadian.  So working from home can mean i can work from Indonesia as well
21:36 toordog-work which is way better for our family life
21:36 bazzz sure
21:36 bazzz my wife was born some miles from where I was born :D
21:36 toordog-work actually my wife haven't been back to indonesia for the past 4 years because of work
21:36 toordog-work :)
21:36 toordog-work are both your families close?
21:36 bazzz within an hour by car
21:37 toordog-work great, that probably makes things way easier :)
21:37 bazzz yeah, my parents see our kids twice a week :D
21:37 bazzz which is really great
21:37 toordog-work wish I could do that too.  My mom lives 9h from where i live
21:37 toordog-work and my wife's family is about 40h away by plane
21:38 bazzz good friend of my father... his daughter moved to Osaka, half around the globe. They see their grandkids in person at most once a year
21:38 bazzz my boys are into soccer. my father goes to every one of their soccer games, which we all really like.
21:39 toordog-work I'm working on it.  My mom hasn't seen my first child, 12 months old, yet.
21:39 toordog-work :)
21:39 toordog-work living too far apart
21:43 dtrainor joined #gluster
21:45 plarsen joined #gluster
21:55 aronwp joined #gluster
21:57 and` joined #gluster
21:57 and` joined #gluster
21:58 toordog-work bazzz http://blog.jgriffiths.org/?p=820
21:58 bazzz thanks
21:59 bazzz I'm currently searching if QNAP iSCSI luns can be replicated live & online to another qnap - which could be an option (although with short downtime), too
22:00 bazzz synology has that stuff vmware certified, but I don't really trust them
22:01 vu joined #gluster
22:02 toordog-work bazzz http://myitnotes.info/doku.php?id=en:jobs:linux_gluster_nfs_for_vmware
22:02 glusterbot Title: en:jobs:linux_gluster_nfs_for_vmware [IT Notes about: Juniper, Cisco, Checkpoint, FreeBSD, Linux, Windows, VmWare....] (at myitnotes.info)
22:02 toordog-work why not?
22:02 bazzz I've posted that link (see above) but yours from that blog seem to be better :D
22:02 toordog-work yes i saw the tag from glusterbot and realized it :)
22:02 bazzz why not?
22:03 toordog-work trust synology
22:03 bazzz never liked DSM and had some bad experiences with desktop NAS some years ago
22:04 vu joined #gluster
22:05 toordog-work ok
22:06 bazzz this is purely subjective
22:08 bazzz http://www.youtube.com/watch?v=OSPWTNtTl30
22:08 glusterbot Title: Synology & VMware 2013 - YouTube (at www.youtube.com)
22:10 bazzz but it looks nice...
22:18 Lee-- joined #gluster
22:10 bazzz I'm off. Bye & thanks!
22:31 theron joined #gluster
22:37 apscomp joined #gluster
22:39 _VerboEse joined #gluster
22:39 huleboer joined #gluster
22:40 diegows joined #gluster
22:51 MacWinner joined #gluster
22:54 aronwp joined #gluster
23:16 if-kenn joined #gluster
23:27 bennyturns joined #gluster
23:43 diegows joined #gluster
23:52 tdasilva joined #gluster
23:57 VerboEse joined #gluster
23:58 dtrainor joined #gluster
