
IRC log for #gluster, 2014-03-10


All times shown according to UTC.

Time Nick Message
00:04 kam270 joined #gluster
00:11 kam270 joined #gluster
00:20 kam270 joined #gluster
00:28 kam270 joined #gluster
00:35 tokik joined #gluster
00:35 kam270 joined #gluster
00:44 kam270 joined #gluster
00:44 DV joined #gluster
00:49 avalys joined #gluster
00:49 gdubreui joined #gluster
00:52 kam270 joined #gluster
00:54 avalys Hi folks - I have a small cluster of 4 machines, 64 cores in total.  I am running a scientific simulation on them, which writes at perhaps 1 MB/s (total) to roughly 64 HDF5 files.  Each HDF5 file is written by only one process.  Writing to HDF5 involves a lot of reading the file metadata and random seeking within the file,  since we are actually writing to about 30 datasets inside each file.  I am hosting the output on a 4-node distributed
00:54 avalys gluster volume to provide a unified namespace for the (very rare) case when each process needs to read the other's files.  I am seeing somewhat lower performance than I expected.  I expected the write-behind cache to buffer each write, but it seems that the writes are being quickly flushed across the network regardless of what write-behind cache size I use (32 MB currently).  Anyone have any suggestions as to what to look at?  I am using glusterfs
00:55 avalys 3.4.2 on ubuntu 12.04
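For reference, the write-behind buffer is a per-volume translator option; a minimal sketch of inspecting and raising it, assuming a volume named simvol (the name is illustrative):

    # show which options are already set on the volume
    gluster volume info simvol
    # per-file write-behind buffer (upstream default is around 1MB)
    gluster volume set simvol performance.write-behind-window-size 32MB
    # make sure write-behind is enabled at all
    gluster volume set simvol performance.write-behind on

Write-behind only absorbs sequential writes until the application flushes; the metadata reads and random seeks that HDF5 does still have to go over the wire, which may be why a larger window shows little effect.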
00:56 bala joined #gluster
01:00 kam270 joined #gluster
01:09 kam270 joined #gluster
01:18 kam270 joined #gluster
01:26 kam270 joined #gluster
01:31 mattappe_ joined #gluster
01:34 kam270 joined #gluster
01:43 kam270 joined #gluster
01:43 harish_ joined #gluster
01:48 neurodrone__ joined #gluster
01:51 kam270 joined #gluster
01:51 badone__ joined #gluster
01:58 kam270 joined #gluster
02:06 haomaiw__ joined #gluster
02:06 kam270 joined #gluster
02:15 kam270 joined #gluster
02:17 neurodrone__ joined #gluster
02:22 kam270 joined #gluster
02:34 aravindavk joined #gluster
02:34 kam270 joined #gluster
02:34 kanagaraj joined #gluster
02:44 haomaiw__ joined #gluster
02:44 jporterfield joined #gluster
02:45 kam270 joined #gluster
02:48 bharata-rao joined #gluster
02:53 kam270 joined #gluster
02:56 neurodrone__ joined #gluster
03:01 kam270 joined #gluster
03:10 kam270 joined #gluster
03:13 ninkotech joined #gluster
03:17 kam270 joined #gluster
03:25 kam270 joined #gluster
03:30 neurodrone__ joined #gluster
03:34 gdubreui joined #gluster
03:40 kam270 joined #gluster
03:46 itisravi joined #gluster
03:47 kam270 joined #gluster
03:51 ninkotech__ joined #gluster
03:55 kam270 joined #gluster
03:57 shubhendu joined #gluster
04:01 mohankumar joined #gluster
04:02 ndarshan joined #gluster
04:03 kam270 joined #gluster
04:11 kam270 joined #gluster
04:16 deepakcs joined #gluster
04:19 kam270 joined #gluster
04:27 kam270 joined #gluster
04:29 RameshN joined #gluster
04:32 saurabh joined #gluster
04:32 hagarth joined #gluster
04:33 aravindavk joined #gluster
04:36 ira joined #gluster
04:37 kam270 joined #gluster
04:43 satheesh joined #gluster
04:46 kam270 joined #gluster
04:51 bala joined #gluster
04:51 vpshastry joined #gluster
04:55 kam270 joined #gluster
05:04 kam270 joined #gluster
05:08 pk1 joined #gluster
05:09 chandan_kumar joined #gluster
05:12 kam270 joined #gluster
05:19 kam270 joined #gluster
05:22 kdhananjay joined #gluster
05:23 ppai joined #gluster
05:26 spandit joined #gluster
05:28 kam270 joined #gluster
05:28 prasanth joined #gluster
05:28 rastar joined #gluster
05:34 rjoseph joined #gluster
05:35 kdhananjay joined #gluster
05:35 vkoppad joined #gluster
05:38 ravindran joined #gluster
05:42 kam270 joined #gluster
05:43 snehal joined #gluster
05:44 bala joined #gluster
05:46 ravindran left #gluster
05:51 kam270 joined #gluster
05:55 raghu joined #gluster
05:59 nshaikh joined #gluster
06:06 lalatenduM joined #gluster
06:10 pk1 joined #gluster
06:11 kdhananjay joined #gluster
06:12 ngoswami joined #gluster
06:14 navid__ joined #gluster
06:29 mohankumar joined #gluster
06:33 prasanth joined #gluster
06:33 rahulcs joined #gluster
06:39 rastar joined #gluster
06:39 Philambdo joined #gluster
06:40 shylesh joined #gluster
06:45 ajha joined #gluster
06:48 satheesh joined #gluster
06:48 satheesh joined #gluster
06:51 mohankumar joined #gluster
06:57 psharma joined #gluster
07:01 vimal joined #gluster
07:02 kanagaraj joined #gluster
07:03 tokik joined #gluster
07:04 jporterfield joined #gluster
07:07 benjamin_____ joined #gluster
07:08 cjanbanan joined #gluster
07:13 solid_liq left #gluster
07:13 gdubreui joined #gluster
07:17 meghanam joined #gluster
07:17 meghanam_ joined #gluster
07:18 gdubreui joined #gluster
07:20 rahulcs joined #gluster
07:28 jtux joined #gluster
07:35 mattapperson joined #gluster
07:49 stigchristian joined #gluster
07:49 gdubreui joined #gluster
07:58 rastar joined #gluster
08:00 andreask joined #gluster
08:00 rahulcs joined #gluster
08:01 rgustafs joined #gluster
08:03 kam270 joined #gluster
08:04 eseyman joined #gluster
08:04 latha joined #gluster
08:15 ctria joined #gluster
08:16 pk1 left #gluster
08:18 kam270 joined #gluster
08:20 rahulcs joined #gluster
08:20 yinyin joined #gluster
08:21 doekia Is there any good munin plugins for 3.4.2?
08:27 kam270 joined #gluster
08:27 keytab joined #gluster
08:28 franc joined #gluster
08:35 kam270 joined #gluster
08:37 cjanbanan joined #gluster
08:39 doekia joined #gluster
08:42 ravindran joined #gluster
08:44 kam270 joined #gluster
08:53 kam270 joined #gluster
09:02 kam270 joined #gluster
09:02 bazzles joined #gluster
09:05 X3NQ joined #gluster
09:06 ravindran joined #gluster
09:11 kam270 joined #gluster
09:12 Norky joined #gluster
09:14 calum_ joined #gluster
09:14 DV joined #gluster
09:19 kam270 joined #gluster
09:20 hagarth joined #gluster
09:24 lalatenduM joined #gluster
09:24 rahulcs joined #gluster
09:27 kam270 joined #gluster
09:31 rahulcs joined #gluster
09:34 chandan_kumar hello
09:34 glusterbot chandan_kumar: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:35 chandan_kumar i have followed the following guide to set up a glusterfs environment
09:35 chandan_kumar http://www.gluster.org/community/documentation/index.php/QuickStart
09:35 glusterbot Title: QuickStart - GlusterDocumentation (at www.gluster.org)
09:36 kam270 joined #gluster
09:36 chandan_kumar at step 4, while i was configuring the trusted pool
09:36 chandan_kumar i am getting following error : peer probe: failed: Probe returned with unknown errno 107
09:37 lalatenduM chandan_kumar, which distribution u r using? and is selinux in disabled or permissive mode?
09:37 chandan_kumar i have checked glusterd service, it is running
09:37 chandan_kumar lalatenduM, i am on fedora 20
09:38 chandan_kumar lalatenduM, selinux is enforcing
09:38 lalatenduM chandan_kumar, also you should check the iptables is blocking anything , if it is just a test vm , you can flush all iptable rules i.e. "iptables -F"
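On a throwaway Fedora test VM, those checks usually look like this (hostnames are illustrative):

    # confirm glusterd is running on both nodes
    systemctl status glusterd
    # flush all iptables rules (test VM only; on anything real, open the gluster ports instead)
    iptables -F
    # put SELinux into permissive mode for the test
    setenforce 0
    # retry the probe
    gluster peer probe node2
    gluster peer status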
09:38 morsik joined #gluster
09:38 chandan_kumar lalatenduM, it is a test vm and i am trying glusterfs integration with openstack swift
09:39 morsik hello there
09:39 morsik anyone knows when 3.5 will be released?
09:40 lalatenduM chandan_kumar, ok , i will change the quick start guide with selinux and iptables info, 5 min
09:40 lalatenduM morsik, hagarth will know it
09:41 lalatenduM morse, I meant hagarth might have information about it
09:41 chandan_kumar lalatenduM, it worked thanks :)
09:41 lalatenduM chandan_kumar, cool :)
09:42 rastar joined #gluster
09:42 aravindavk joined #gluster
09:50 lalatenduM chandan_kumar, i have edited the quick start guide to include information about SELinux and iptables i.e. step4
09:53 kam270 joined #gluster
09:56 lalatenduM s/morse/morsik/
09:56 glusterbot What lalatenduM meant to say was: morsik, I meant hagarth might have information about it
09:58 morsik lol.
10:02 kam270 joined #gluster
10:06 chandan_kumar lalatenduM, thanks :)
10:08 Pavid7 joined #gluster
10:09 smithyuk1 joined #gluster
10:10 yinyin joined #gluster
10:10 aravindavk joined #gluster
10:10 dusmant joined #gluster
10:25 rahulcs joined #gluster
10:31 DV joined #gluster
10:32 aravindavk joined #gluster
10:34 jtux joined #gluster
10:35 kam270 joined #gluster
10:42 kam270 joined #gluster
10:48 T0aD joined #gluster
10:51 kam270 joined #gluster
11:00 kam270 joined #gluster
11:07 tdasilva left #gluster
11:08 kam270 joined #gluster
11:15 vkoppad joined #gluster
11:17 kam270 joined #gluster
11:25 rastar joined #gluster
11:26 kam270 joined #gluster
11:33 mbukatov joined #gluster
11:35 kam270 joined #gluster
11:37 rahulcs joined #gluster
11:37 kshlm joined #gluster
11:37 aravindavk joined #gluster
11:38 vkoppad joined #gluster
11:45 kam270 joined #gluster
11:50 rfortier1 joined #gluster
11:50 B21956 joined #gluster
11:53 kam270 joined #gluster
11:55 rahulcs joined #gluster
11:55 pk1 joined #gluster
11:56 Pavid7 joined #gluster
11:56 smithyuk1 Hi, can anyone point me towards the up to date documentation on the distribute translator. I see this "Custom layouts possible with distribute translator." in 3.4 release notes but no corresponding documentation?
12:01 ctria joined #gluster
12:03 doekia anyone knows about a good munin plugins for 3.4.2?
12:04 diegows joined #gluster
12:05 rastar joined #gluster
12:06 edward2 joined #gluster
12:08 tokik joined #gluster
12:11 itisravi joined #gluster
12:14 rahulcs joined #gluster
12:16 pk1 joined #gluster
12:18 nullck joined #gluster
12:19 ppai joined #gluster
12:22 yinyin joined #gluster
12:23 dusmant joined #gluster
12:29 neurodrone joined #gluster
12:29 neurodrone joined #gluster
12:35 bfoster joined #gluster
12:41 tokik joined #gluster
12:43 hybrid512 joined #gluster
12:44 hybrid512 joined #gluster
12:49 qdk_ joined #gluster
12:51 chirino joined #gluster
12:56 rfortier joined #gluster
12:58 benjamin_____ joined #gluster
12:58 ppai joined #gluster
13:04 shubhendu joined #gluster
13:08 rwheeler joined #gluster
13:08 hybrid512 joined #gluster
13:09 hybrid512 joined #gluster
13:11 chirino joined #gluster
13:17 ctria joined #gluster
13:19 bennyturns joined #gluster
13:20 rahulcs joined #gluster
13:23 japuzzo joined #gluster
13:29 jporterfield joined #gluster
13:30 jtux joined #gluster
13:36 sroy_ joined #gluster
13:37 smithyuk1 Hi, can anyone point me towards the up to date documentation on the distribute translator. I see this "Custom layouts possible with distribute translator." in 3.4 release notes but no corresponding documentation?
13:39 lalatenduM smithyuk1, I think it is a good question for gluster-devel mailing list
13:39 RayS joined #gluster
13:41 lalatenduM smithyuk1, http://www.gluster.org/interact/mailinglists/
13:41 lalatenduM @mailinglist
13:41 glusterbot lalatenduM: I do not know about 'mailinglist', but I do know about these similar topics: 'mailing list'
13:41 lalatenduM @bug
13:41 glusterbot lalatenduM: (bug <bug_id> [<bug_ids>]) -- Reports the details of the bugs with the listed ids to this channel. Accepts bug aliases as well as numeric ids. Your list can be separated by spaces, commas, and the word "and" if you want.
13:42 lalatenduM @fileabug
13:42 glusterbot lalatenduM: Please file a bug at http://goo.gl/UUuCq
13:43 calum_ joined #gluster
13:44 kshlm joined #gluster
13:45 siel joined #gluster
13:47 smithyuk1 lalatenduM: thanks
13:48 shyam joined #gluster
13:52 kkeithley1 joined #gluster
13:52 lmickh joined #gluster
13:57 jag3773 joined #gluster
13:57 theron joined #gluster
14:01 plarsen joined #gluster
14:04 pk1 left #gluster
14:07 rfortier joined #gluster
14:11 rahulcs joined #gluster
14:11 tokik joined #gluster
14:14 harish_ joined #gluster
14:15 ndk joined #gluster
14:18 jobewan joined #gluster
14:19 jobewan joined #gluster
14:21 doekia anyone knows about a good munin plugins for 3.4.2?
14:32 RayS joined #gluster
14:40 andreask joined #gluster
14:46 chirino joined #gluster
14:49 chandan_kumar Hi, is there any docs where i can integrate gluster fs as a backend storage with swift?
14:49 ultrabizweb joined #gluster
14:51 sijis left #gluster
14:54 benjamin_ joined #gluster
14:55 sputnik1_ joined #gluster
14:59 bugs_ joined #gluster
15:01 rfortier1 joined #gluster
15:12 rfortier joined #gluster
15:13 daMaestro joined #gluster
15:14 bennyturns joined #gluster
15:16 nueces joined #gluster
15:16 gmcwhistler joined #gluster
15:28 kampnerj joined #gluster
15:33 nathharp joined #gluster
15:40 ctria joined #gluster
15:48 nueces joined #gluster
15:48 sputnik1_ joined #gluster
15:48 nueces left #gluster
15:49 seapasulli joined #gluster
15:52 zaitcev joined #gluster
15:53 kshlm joined #gluster
15:54 zerick joined #gluster
15:57 jiffe98 rsyncing 20gb of web files since about 5pm yesterday, so about 18 hours so far :\
15:57 jiffe98 works well once copied over, but the initial writes take forever
15:59 mohankumar joined #gluster
16:03 jiffe98 I'm seeing a lot of Remote I/O error (121) during the renames too
16:05 semiosis use -inplace when rsyncing into glusterfs
16:08 nathharp left #gluster
16:08 jiffe98 semiosis: why is that necessary?
16:08 jiffe98 or what does that help
16:09 kaptk2 joined #gluster
16:10 semiosis see man rsync
16:10 mattapperson joined #gluster
16:10 semiosis it prevents rsync from doing renames on the files
16:10 jiffe98 but why are you saying I should do that with rsyncing to gluster?  Just to prevent additional operations?
16:11 T0aD who knows!
16:13 jiffe98 don't get me wrong, minimizing unnecessary operations is always good, just wondering if there was something else to it
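The point of --inplace on a gluster mount: rsync normally writes each file to a temporary name and then renames it, and on a distributed volume the temporary name usually hashes to a different brick than the final name, so the rename leaves link-files behind and adds extra metadata operations. A hedged example of the suggested invocation (paths are illustrative):

    # write directly into the destination files instead of tempfile-then-rename
    rsync -av --inplace --whole-file /srv/web/ /mnt/gluster/web/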
16:20 mattappe_ joined #gluster
16:21 yinyin joined #gluster
16:22 lalatenduM joined #gluster
16:30 Mo_ joined #gluster
16:31 rfortier joined #gluster
16:32 jag3773 joined #gluster
16:36 ctria joined #gluster
16:37 Slash joined #gluster
16:41 chirino_m joined #gluster
16:45 sjoeboo joined #gluster
16:45 ndk joined #gluster
16:49 Matthaeus joined #gluster
16:50 gork4life joined #gluster
16:51 gork4life I need help with gluster I'm trying to do failover with gluster using ctdb
16:52 gork4life when I unplug an node it just hangs there for about ten minutes and never really comes back
17:01 Elico joined #gluster
17:04 stickyboy joined #gluster
17:09 lpabon joined #gluster
17:13 hagarth joined #gluster
17:21 sjoeboo joined #gluster
17:27 nage joined #gluster
17:27 nage joined #gluster
17:30 elyograg JoeJulian: ping.  problem I put up on the mailing list on Saturday (3/8) evening.  6:45 PM your time according to my mail archive folder.  not sure if that will be affected by daylight savings happening between then and now.
17:48 kkeithley1 joined #gluster
17:48 _dist joined #gluster
17:50 B21956 joined #gluster
18:02 cfeller joined #gluster
18:02 aurigus joined #gluster
18:05 Pavid7 joined #gluster
18:14 cjh973 gluster guys, anyone have a code example of how to use the readdirplus_r function in the gluster api?  i thought it was supposed to get you a variable number of entries back on a directory but you don't seem to pass in the number you want.  just a struct *stat which doesn't make sense to me
18:15 rotbeard joined #gluster
18:15 semiosis cjh973: https://github.com/semiosis/libgfapi-jni/blob/master/libgfapi-jni/src/test/java/com/peircean/libgfapi_jni/internal/GLFSTest.java#L271
18:15 glusterbot Title: libgfapi-jni/libgfapi-jni/src/test/java/com/peircean/libgfapi_jni/internal/GLFSTest.java at master · semiosis/libgfapi-jni · GitHub (at github.com)
18:16 semiosis cjh973: best i can figure the readdir methods work like a list iterator.  give it the current item and a place to store the next item, and it fills the place with the next item
18:20 elyograg is there anyone here who knows how to find/fix problems in the .glusterfs structure?
18:22 elyograg I have found some problems with bad symlinks.  need to know how to fix them and how to locate any additional problem.
18:23 elyograg if I can learn how the problem appeared at all, that would be helpful as well.
18:29 YazzY joined #gluster
18:29 YazzY joined #gluster
18:32 elyograg http://apaste.info/m03v
18:34 elyograg i guess glusterbot doesn't react to the apache paste bucket. :)
18:39 cjh973 semiosis: thanks :)
18:39 elyograg is there an effective limit on directory depth with gluster, at least with respect to parts of the system that utilize .glusterfs?
18:39 cjh973 semiosis: i got readdir_r working just fine.  I was just confused where the stat pointer comes in with readdirplus
18:40 diegows joined #gluster
18:48 kmai007 joined #gluster
18:56 kmai007 can anybody decipher this client message for me?
18:56 kmai007 [2014-03-10 18:56:16.843930] W [fuse-bridge.c:462:fuse_entry_cbk] 0-glusterfs-fuse: 44105900: LOOKUP() /admin/htdocs/coldfusion/cfsched/<?xml version="1.0" encoding="UTF-8"?>
18:56 kmai007 <!-- File:               debug_settings.xml                               -->
18:56 kmai007 <!-- Author:             Brian Detweiler (icom942)                        -->
18:56 kmai007 <!-- Description:        Config file contains debug settings for the      -->
18:56 kmai007 <!--                     entire Zephyr front end.                         -->
18:56 kmai007 <!-- Last Modified:      Brian Detweiler [icom942]
18:56 kmai007 -->
18:56 kmai007 <debug_settings>
18:56 kmai007 <page>
18:56 kmai007 <name>
18:56 kmai007 myreports
18:56 kmai007 < => -1 (File name too long)
18:56 kmai007 it just spews over and over in my client fuse logs
18:56 tdasilva joined #gluster
18:58 elyograg looks like something is trying to use the contents of an XML file as the actual filename.
18:59 elyograg it starts with this:
18:59 elyograg LOOKUP() /admin/htdocs/coldfusion/cfsched/<?xml version="1.0"
18:59 rpowell joined #gluster
19:00 kmai007 strange, there is a file named debug_settings.xml, but why it would display its content is wierd
19:01 elyograg I would guess that coldfusion is the source of the problem.  perhaps it has a mode where you can actually put XML data on a URL path, and there's something wrong in the coldfusion config that makes it try to use it as an actual file?
19:02 kmai007 agreed. i'll have to bounce that off the CF guys
19:03 kmai007 elyograg:thanks
19:08 jbrooks joined #gluster
19:09 philv76 joined #gluster
19:12 semiosis cjh973: i haven't been able to find a man page for readdirplus.  if you find out let me know!
19:14 jurrien_ joined #gluster
19:18 jag3773 joined #gluster
19:29 cjh973 semiosis: it seems to be an nfs thing
19:30 cjh973 i think i'll just stick to readdir_r for now.
19:30 mattappe_ joined #gluster
19:33 Leolo joined #gluster
19:34 Leolo hello
19:34 glusterbot Leolo: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:36 Leolo I have a question about disconnecting and recovery - say I had 3 servers (S1, S2, S3).  Say the link to S3 goes down for a while
19:36 Leolo when S3 comes back, what happens ?
19:37 Leolo it seems to be that clients talk to a specific server in the cluster.  If a client is talking to S3 and writes to S3, will this be prevented?  or will S1+S3 work out the differences with S3
19:37 JoeJulian Leolo: https://www.gluster.org/2010/07/video-how-gluster-self-healing-with-automatic-file-replication-works/
19:37 JoeJulian ~mount server | Leolo
19:37 glusterbot Leolo: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
19:38 Leolo oh
19:38 Leolo nice
19:38 JoeJulian Tell me what you think of that video from your point of view.
19:38 lalatenduM JoeJulian, can we please teach glusterbot about @mailinglist :)
19:39 JoeJulian lalatenduM: feel free
19:39 lalatenduM I mean mailinglists , which will point to web where people can subscribe to gluster-users or gluster-devel
19:39 JoeJulian Can we teach mailing list about @irc? :D
19:39 JoeJulian @learn
19:39 glusterbot JoeJulian: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
19:40 lalatenduM JoeJulian, :) cool
19:40 JoeJulian Though, I'd prefer not to redirect people to the mailing list.
19:41 JoeJulian Unless that's followed with, "but let me try to help you."
19:41 elyograg JoeJulian: hello again.  have a little time to impart what knowledge you have regarding my .glusterfs issues?
19:41 lalatenduM JoeJulian, may be you are right. I came across situations when I dont know answer to a question and I tell people to send a mail on users or devel, depending upon the scope of the question
19:42 lalatenduM Not sure if that is right approach I follow
19:42 JoeJulian I think it's valid to offer an additional option, but I don't want anyone to feel they're getting "the runaround".
19:42 JoeJulian ... plus, if you try and even fail to help, you probably learned something in the process.
19:42 Leolo Joe - that video answered my question.  but brought up another one -
19:43 JoeJulian elyograg: We're just about to leave for lunch after I open a ticket with CLink. Let's work on it when I get back.
19:43 elyograg JoeJulian: ok.  thank you.
19:43 Leolo it seems self-healing only happens if a file gets accessed.   Which seems to imply a "wound" could exist for a while
19:43 Leolo is it possible to fix all files that need healing?
19:43 JoeJulian Leolo: That's old. There's now a self-heal daemon that will handle it.
19:43 Leolo ah ok
19:43 lalatenduM JoeJulian, what to do when you dont have the ans and time to find it>>?
19:44 Leolo joe - this deamon runs on the servers?
19:44 elyograg Leolo: yes.  As of 3.3, if I remember right (which I might not).
19:44 JoeJulian lalatenduM: Personally, I just pretend I'm not here. Hehe.
19:45 JoeJulian lalatenduM: Fair question though.
19:45 JoeJulian It's not hard to direct people where to go to help themselves. Maybe give clues what to google for, where the log files are, that kind of stuff. I try to lean more on helping people help themselves.
19:46 lalatenduM JoeJulian, yup I do that too sometime :), sometime I feel if I direct them towards mailing list atleast the person will not feel hopeless abt what to do next
19:46 Leolo further question - what about a nightmare scenario?  the switch between S1+C1 and S2+C2 goes down.  C1 then writes to S1.  C2 then writes to S2.  Then they reconnect...
19:46 elyograg I do the 'pretending I'm not here' thing too. :)  when I am obviously here because I'm asking or answering questions, I will notify the interested party that I have no idea.
19:46 JoeJulian ~split-brain | Leolo
19:46 glusterbot Leolo: To heal split-brain in 3.3+, see http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ .
19:46 JoeJulian @quorum
19:46 glusterbot JoeJulian: http://community.gluster.org/q/how-do-i-use-the-quorum-feature-to-prevent-split-brain/
19:46 JoeJulian @forget quorum
19:46 glusterbot JoeJulian: The operation succeeded.
19:47 JoeJulian Need new factoids for that one.
19:47 JoeJulian Leolo: There is two different quorum features to prevent split-brain.
19:47 elyograg I'm *really* hoping the quorum arbiter feature makes it to release, so I can use it with my replica 2 volumes.
19:47 JoeJulian s/ to / that can be used to /
19:47 glusterbot What JoeJulian meant to say was: Leolo: There is two different quorum features that can be used to prevent split-brain.
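The two features are client-side quorum (the client refuses writes when it cannot see a majority of the replica set) and server-side quorum (glusterd stops its bricks when it loses a majority of peers). A minimal sketch, assuming a replica 3 volume named myvol:

    # client-side quorum
    gluster volume set myvol cluster.quorum-type auto
    # server-side quorum
    gluster volume set myvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%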
19:49 lalatenduM @learn
19:49 glusterbot lalatenduM: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
19:50 lalatenduM learn sambavfs http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
19:50 lalatenduM @sambavfs
19:50 glusterbot lalatenduM: I do not know about 'sambavfs', but I do know about these similar topics: 'samba'
19:50 JoeJulian lalatenduM: what about centos6?
19:50 JoeJulian lalatenduM: You forgot the "as"
19:51 Leolo centos6 is fedora 12
19:51 lalatenduM learn sambavfs as http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
19:51 lalatenduM JoeJulian, the same settings would work
19:51 elyograg on my own problem, my research seems to indicate that linux no longer has a MAXSYMLINKS limit, removed sometime before the CentOS 6 kernels.  glibc does, though -- and it seems to be eight.  Really hoping I don't have to recompile glibc.
19:51 JoeJulian Cool. That's my next big project.
19:51 lalatenduM @sambavfs
19:51 glusterbot lalatenduM: I do not know about 'sambavfs', but I do know about these similar topics: 'samba'
19:51 JoeJulian elyograg: I'm not even sure if that's really a problem.
19:52 Joe630 is there a best practice for making the gluster config high availability
19:52 J_Man joined #gluster
19:52 JoeJulian Joe630: Use replicated volumes and fuse mount.
19:52 lalatenduM JoeJulian, what am I doing wrong
19:53 rahulcs joined #gluster
19:53 hybrid512 joined #gluster
19:53 J_Man hello everyone!  is there anyone who's using gluster as a storage backend for samba shares and using the Samba acl_xattr vfs plugin?
19:53 Joe630 JoeJulian: what if i am using the nfs client
19:53 JoeJulian @learn sambavfs as http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
19:53 glusterbot JoeJulian: The operation succeeded.
19:53 JoeJulian lalatenduM: @ to let glusterbot know you're talking to it.
19:53 JoeJulian @sambavfs
19:53 glusterbot JoeJulian: http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
19:53 J_Man hmm...someone else asking same thing as I am? :)
19:54 Joe630 left #gluster
19:54 lalatenduM J_Man, :) your timing is perfect :)
19:54 J_Man not sure that's the same problem I am having...reading now
19:54 lalatenduM JoeJulian, thanks :)
19:55 J_Man I can't seem to set NT ACL's on files stored on gluster
19:55 lalatenduM J_Man, it would work also with " acl_xattr" i.e. NT ACL
19:55 J_Man vfs objects = acl_xattr
19:56 Leolo the examples seem to create bricks as xfs... is there an advantage over ext4?
19:56 JoeJulian Not any more
19:56 J_Man there's bugs between ext4 and gluster
19:56 J_Man or has that been fixed?
19:57 kuprinv joined #gluster
19:57 JoeJulian There was an ext4 bug, but long since fixed.
19:57 elyograg JoeJulian: for after your lunch: no visible problems with the mount.  too many levels on some symlinks in .glusterfs, though.  errors in nfs.log that seem to be related.  I'm scared to death of starting another rebalance and learning that these problems are going to trash my filesystem in some new way.
19:58 theron joined #gluster
19:58 lalatenduM J_Man, I think you have to put  acl_xattr = yes for the samba share, i have used NT ACL too in a test environment , but can't remember it now
19:59 lalatenduM J_Man, it is 1:30 AM for me now, maybe that's the reason my memory is not helping me
19:59 J_Man lemme put up a pastebin...one sec
20:00 Leolo quorum requires an odd number of servers?
20:00 cjanbanan joined #gluster
20:00 lalatenduM JoeJulian, J_Man I thought XFS gives better performance over ext4 , and thats why Red Hat Storage server recommends xfs
20:01 semiosis Leolo: no
20:01 semiosis Leolo: but it's a good idea to have replica 3 with quorum
20:01 JoeJulian The lead file system developer for red hat likes the code base better for xfs. He feels it's slimmer, more manageable, and less prone to error.
20:02 Leolo Joe - hmmm
20:02 lalatenduM JoeJulian, interesting :)
20:02 J_Man http://pastebin.com/kd6SQfG3
20:02 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:02 J_Man bleh
20:03 J_Man http://fpaste.org/84071/
20:03 glusterbot Title: #84071 Fedora Project Pastebin (at fpaste.org)
20:03 hybrid512 joined #gluster
20:03 J_Man that basically explains what I'm facing :)
20:04 elyograg If I had any idea how, I would update that PB message from glusterbot to say people should use the other URLs for *future* pastes.
20:04 lalatenduM J_Man, without "kernel share modes = No" for the share you might get this issue
20:04 Leolo btw, I'm at the "pie in the sky, if ever we create a cloud version of our product" research  stage
20:04 Leolo so I don't have concrete examples
20:05 J_Man lalatenduM, never heard or seen that option mentioned...reading up on it now
20:05 semiosis mmmm pie
20:05 semiosis any recommendations on pies for pi day?
20:06 lalatenduM J_Man, it has been discussed in gluster-users , I trying to get the email thread , so that I can pass it to you
20:06 elyograg semiosis: local place for me: http://thepie.com/
20:06 glusterbot Title: The Pie Pizzeria - People's Choice for Utah's Best Pizza for over 30 years! (at thepie.com)
20:07 semiosis savory pie had not crossed my mind
20:07 xoritor joined #gluster
20:07 semiosis thx for the tip
20:07 J_Man http://www.spinics.net/lists/gluster-devel/msg11603.html
20:07 glusterbot Title: Re: [Gluster-users] RPMs for Samba 4.1.3 w/ Gluster VFS plug-in for RHEL, CentOS, etc., now available Gluster Development (at www.spinics.net)
20:07 J_Man that looks like a reference
20:08 xoritor anyone got any suggestions for me to check on why self-heal would go _very_ slow?
20:08 xoritor 4x1Gbit lacp bond on a vlan
20:08 xoritor gluster 3.4.2 fedora 19
20:08 semiosis xoritor: for my dataset, full alg is faster than diff (default).  i also reduced background heal threads to 2 fwiw
20:09 xoritor semiosis, cool thanks, i will look into that
20:09 lalatenduM J_Man, nope that is not, in the email thread I had given explanation of whay we need to use the settings in smb.conf
20:09 semiosis xoritor: see 'gluster volume set help' and also ,,(undocumented options)
20:09 glusterbot xoritor: Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
20:09 xoritor beautiful!
20:09 xoritor thanks again
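The two tweaks semiosis mentions map to volume options; a sketch assuming a volume named vmvol:

    # heal whole files instead of computing diffs (often faster for large, mostly-changed files)
    gluster volume set vmvol cluster.data-self-heal-algorithm full
    # limit how many files are healed in the background at once
    gluster volume set vmvol cluster.background-self-heal-count 2
    # check what still needs healing
    gluster volume heal vmvol info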
20:09 andreask joined #gluster
20:10 J_Man weird thing...the references I'm finding are indicating that that option prevents writing to the share...but yet, I can write to the share, just can't set xattrs
20:11 J_Man also, I should probably mention that I'm not using the glusterfs vfs module right now, either
20:11 J_Man could that be my problem?
20:14 lalatenduM J_Man, oops I didn't notice that you are not using vfs module , as in you have mentioned vfs plugin in your 1st question :)
20:15 J_Man I didn't think I would need it, since my samba server is *NOT* one of the gluster servers as well
20:15 lalatenduM J_Man, you should use glusterfs vfs plugin , it will be faster most of the cases
20:16 J_Man vfs object = glusterfs right?
20:16 lalatenduM J_Man, hmm, yes
20:16 J_Man do you need it on the client only, or on the servers as well?
20:16 J_Man not too familiar with exactly what that does
20:17 J_Man and need to find one that works with my combination of gluster and samba :)
20:17 lalatenduM J_Man, basically with gluster vfs plugin, samba directly talks to gluster, it does not need a local mount in samba server
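A rough smb.conf share using the glusterfs VFS module, assuming a volume named gf-os served by host gf-os01-ib (values are illustrative); with this, smbd talks to the bricks over libgfapi and no FUSE mount is needed on the Samba server:

    [gf-share]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = gf-os
        glusterfs:volfile_server = gf-os01-ib
        glusterfs:logfile = /var/log/samba/glusterfs-gf-os.log
        kernel share modes = No
        read only = No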
20:18 lalatenduM J_Man, np, lets concentrate on ur setup. lets forget vfs plugin:)
20:18 J_Man OK :)
20:18 lalatenduM J_Man, so you are using a fuse mount right?
20:18 J_Man basically, it seems to me that samba is having problems setting the xattr's on the gluster filesystem
20:19 J_Man yeah
20:19 J_Man gf-os01-ib:/gf-os on /gf-os type fuse.glusterfs (rw,default_permissions,al​low_other,max_read=131072)
20:19 lalatenduM J_Man, have u mount it using -o acl
20:19 lalatenduM ?
20:19 J_Man tried that..worked neither with or without
20:19 J_Man but at command like I can do setfacl on the gluster filesystem successfully
20:19 J_Man command line
20:20 lalatenduM J_Man, are you using Active Directory or root to mount the samba share on the windows client?
20:20 J_Man just browsing to it
20:20 J_Man using my AD username
20:21 lalatenduM J_Man, ok, have added the samba server to ur AD also?
20:22 lalatenduM s/have/have u/
20:22 glusterbot What lalatenduM meant to say was: J_Man, ok, have u added the samba server to ur AD also?
20:22 J_Man yes
20:22 J_Man both the gluster servers and the samba server are joined to the AD domain, and winbind is set up
20:22 J_Man and functioning properly
20:23 rwheeler joined #gluster
20:23 lalatenduM J_Man, then do following, fuse mount with -o acl , i.e. mount -t glusterfs -o acl
20:23 J_Man on the samba client?
20:23 lalatenduM J_Man, on the samba server, all the below steps are for samab server
20:24 J_Man k stand by one
20:24 lalatenduM create a directory on the mount lets say "testsmb"
20:24 lalatenduM do "chgrp 'AD user group' /mnt/point/testsmb/"
20:25 J_Man by the way, is it normal that the "acl" option does not show up on the output of the "mount" command
20:25 lalatenduM AD user group can be replaced with a single user name also
20:25 J_Man I've been setting the share directory root.root with 777 perms
20:25 lalatenduM J_Man, to see if acl is set , do "ps aux | grep acl"
20:26 J_Man go tit
20:26 J_Man got it
20:26 lalatenduM J_Man, you can't give root permision as you are using AD
20:26 J_Man that's how I did it on my local filesystem share that works *shrug*
20:26 J_Man OK
20:26 lalatenduM now "chmod 770 /mnt/point/testsmb/"
20:27 Elico how to use the repository of openSUSE??
20:27 lalatenduM J_Man, not sure why local worked :), glusterfs is a distributed fs
20:27 J_Man [root@data-test01 gf-os]# mkdir testsmb
20:27 J_Man [root@data-test01 gf-os]# chown "GAMMA+jutley"."GAMMA+domain users" testsmb
20:27 J_Man [root@data-test01 gf-os]# chmod 777 testsmb
20:28 J_Man gf-os is my gluster exported dir
20:28 lalatenduM do a "ls -l" to see if grp is set correctly
20:28 J_Man drwxrwxrwx 2 GAMMA+jutley GAMMA+domain users            6 Mar 10 16:26 testsmb
20:28 lalatenduM cool
20:29 lalatenduM go to client and check if you can set permission
20:29 J_Man need to export the new dir..one sec
20:29 lalatenduM ok
20:31 J_Man "Unable to save permission changes on {filename}.  The request is not supported."
20:32 J_Man Gluster log file on the client shows:
20:32 J_Man [2014-03-10 20:31:08.756183] W [client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-gf-os-client-1: remote operation failed: No data available
20:33 badone_ joined #gluster
20:34 lalatenduM J_Man, you are using Windows client right?
20:34 J_Man yeah, windows 7 pro client
20:35 Leolo question about "replica" - if I have 10 servers in the pool, but create a volume with "replica 3" that means a given file gets replicated to 3 of those 10 servers?
20:35 Leolo or is the unit "bricks"?
20:35 J_Man lalatenduM, what about this option in glusterfs: --selinux              Enable SELinux label (extended attributes) support
20:35 J_Man on inodes
20:36 lalatenduM J_Man, what getenforce returns on your gluster nodes
20:36 J_Man SELinux is disabled
20:37 elyograg Leolo: replica 3 means that for each slice (making up words here) of a distributed volume, there are three replicas.  you need a brick for each replica.
20:37 J_Man but could that option not being present be disabling setting xattr's?
20:37 J_Man since selinux also uses user_xattr's?
20:38 jbrooks joined #gluster
20:38 Leolo so volume create test1 replica3 brick1 brick2 brick3 brick4 would be a syntax error?
20:38 JoeJulian @replica
20:38 glusterbot JoeJulian: Please see http://joejulian.name/blog/glusterfs-replication-dos-and-donts/ for replication guidelines.
20:38 kmai007 JoeJulian:
20:38 lalatenduM J_Man, disabling is the right thing to do, as glusterfs does not support it. not sure why it is not working for you, it worked fine for me it some point of time, I have moved to vfs plugin now
20:38 JoeJulian @replica count
20:38 glusterbot JoeJulian: glusterfs requires the number of bricks to be a multiple of the replica count; the number of servers is strongly recommended to also be a multiple of the replica count, however this is not an absolute requirement
20:38 chirino joined #gluster
20:39 JoeJulian hmm, where is that factoid...
20:39 Leolo right
20:39 JoeJulian @brick order
20:39 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
20:39 badone_ joined #gluster
20:40 J_Man THAT DID IT!
20:41 J_Man mounted glusterfs with -o acl,selinux
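Spelled out, the mount that finally worked (server, volume, and mount point as in the log; the fstab line is a hedged equivalent):

    # POSIX ACLs plus extended-attribute (selinux) support on the FUSE mount
    mount -t glusterfs -o acl,selinux gf-os01-ib:/gf-os /gf-os
    # roughly equivalent /etc/fstab entry
    gf-os01-ib:/gf-os  /gf-os  glusterfs  acl,selinux,_netdev  0 0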
20:41 kmai007 in my fuse client logs i'm seeing this http://fpaste.org/84087/44840591/  i'm not sure what it means by too long?
20:41 glusterbot Title: #84087 Fedora Project Pastebin (at fpaste.org)
20:41 JoeJulian kmai007: You have some "open" that's using xml for the filename.
20:43 lalatenduM J_Man, interesting which distribution and what glusterfs version you are using
20:44 J_Man CentOS 6.5, gluster 3.4.1
20:44 J_Man (a little behind, but not much)
20:46 chirino_m joined #gluster
20:46 lalatenduM J_Man, cool, I learned something new :)
20:46 J_Man I remember seeing something about that on Friday in one of the mailing lists
20:47 chirino_ joined #gluster
20:47 J_Man lalatenduM, thank you very much for helping!  You just saved my butt!
20:48 chirino joined #gluster
20:48 lalatenduM J_Man, :) good to know
20:50 lalatenduM J_Man, I am wondering how I missed the mail:(.if possible send me the linl
20:50 chirino_m joined #gluster
20:50 lalatenduM s/linl/link/
20:50 glusterbot What lalatenduM meant to say was: J_Man, I am wondering how I missed the mail:(.if possible send me the link
20:51 J_Man I'll see if I can find it again
20:51 daMaestro joined #gluster
20:51 chirino_ joined #gluster
20:53 chirino joined #gluster
20:54 jmarley joined #gluster
20:54 jmarley joined #gluster
20:54 J_Man lalatenduM, https://groups.google.com/forum/#!topic/linux.samba/3W85B8ZoJWg
20:54 glusterbot Title: Google Groups (at groups.google.com)
20:54 J_Man look at the one from "wil" on 9/24/13
20:55 Leolo how does one add a brick to a volume?
20:56 Leolo oh wait, add-brick
20:57 lalatenduM J_Man, thanks :) I am in samba list but missed it
20:58 chirino_m joined #gluster
20:58 rpowell1 joined #gluster
21:01 chirino_ joined #gluster
21:04 XpineX joined #gluster
21:08 chirino joined #gluster
21:11 chirino_m joined #gluster
21:20 tdasilva left #gluster
21:31 chirino joined #gluster
21:32 chirino_ joined #gluster
21:35 chirino_m joined #gluster
21:39 chirino joined #gluster
21:45 chirino_m joined #gluster
21:47 chirino_ joined #gluster
21:49 chirino joined #gluster
21:51 ctria joined #gluster
21:51 theron joined #gluster
21:54 mattappe_ joined #gluster
21:54 chirino_m joined #gluster
21:57 kmai007 JoeJulian: how would i get the brick log files to write to a file that has been deleted without rebooting?
21:57 chirino joined #gluster
21:57 kmai007 glusterfs  3323      root    4w      REG              253,9   85569561        279 /var/log/glusterfs/bricks/export-content-static.log-20140302 (deleted)
21:57 Gizzard joined #gluster
21:57 kmai007 thats what i get when i search for lsof
21:58 chirino_m joined #gluster
21:59 JoeJulian kmai007: I assume you mean to *stop* writing to... kill -HUP
22:01 kmai007 JoeJulian: correct, but it makes me nervous
22:01 kmai007 b/c when i do an lsof
22:01 JoeJulian kmai007: btw... that's why we like to use the "copytruncate" methos for log rotation.
22:01 JoeJulian s/methos/method/
22:01 glusterbot What JoeJulian meant to say was: kmai007: btw... that's why we like to use the "copytruncate" method for log rotation.
22:01 JoeJulian Don't be nervous. That's how it's supposed to be done.
22:02 JoeJulian kill -HUP 3323
22:02 JoeJulian I suppose you could also use the cli's logrotate command.
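Both approaches, sketched out (the PID comes from the lsof output above; the logrotate stanza is an assumption about local policy):

    # tell the brick process to reopen its log files after an external rotation
    kill -HUP 3323
    # or rotate through the CLI
    gluster volume log rotate <volname>
    # or avoid the problem entirely with copytruncate in /etc/logrotate.d/glusterfs:
    /var/log/glusterfs/bricks/*.log {
        daily
        rotate 7
        compress
        copytruncate
    }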
22:04 kmai007 tried the cli and it didn't help
22:05 kmai007 i also see
22:05 kmai007 [root@omhq1140 ~]# lsof|grep delete
22:05 kmai007 glusterfs  3287      root  txt       REG             253,10     73096     134689 /usr/sbin/glusterfsd (deleted)
22:05 kmai007 that pid is an active volume
22:05 kmai007 i wonder if its just best to reboot the brick
22:07 seapasulli left #gluster
22:10 gork4life left #gluster
22:12 Leolo gluster is kinda, sorta like raid0 and raid1
22:14 Leolo which leads to my next question - is it useful/pertinent to use software raid1 as a brick?
22:15 _dist Leolo: I wouldn't compare gluster to raid, I agree it has mirror (replicate) and stripe (distribute) storage capabilities that are comparable, but I'd recommend your bricks are on top of raid. I haven't talked to anyone who used mdadm for it, personally I use zfs, most people I know use raid5/6 hardware
22:16 _dist the reason I make that recommendation is because if a brick in a replicate volume "dies" because the disk dies, it'll usually be more a hassle to fix than if a disk died in a raid1 volume (for equal comparison)
22:17 Leolo right
22:18 Leolo oh right, you add bricks to volumes, which will automatically expand
22:19 lmickh joined #gluster
22:22 _dist Leolo: for distribute that's right, for replicate they'd automatically self heal (like mirror raid)
22:23 velladecin left #gluster
22:24 Leolo yeah, that's another question : if you have "replicate 3", you can't just add one new brick; it needs to be 3 at a time?
22:25 _dist Leolo: nope, you can absolutely add another; when you do your add-brick just specify the change, say adding one brick with "replica 4 newbrick"
22:25 _dist Leolo: unless you mean to add space not replicas
22:25 elyograg Leolo: correct.  unless you're changing your replica count as _dist just mentioned.
22:26 Leolo I mean to add space
22:26 _dist Leolo: in that case you could drop your replica from 3 to 2, and then have 2x2 instead of 3x3
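Sketch of both forms for a replica 3 volume named docs (server and brick paths are illustrative):

    # add capacity: a whole new replica set, then rebalance
    gluster volume add-brick docs server4:/bricks/b1 server5:/bricks/b1 server6:/bricks/b1
    gluster volume rebalance docs start
    # add redundancy instead: raise the replica count with one new brick per existing replica set
    gluster volume add-brick docs replica 4 server4:/bricks/b1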
22:26 Leolo of course
22:27 Leolo I'm reading about.  One blog post says to use as few replicas as you can get away with
22:27 Leolo (I'm paraphrasing)
22:27 elyograg I don't think gluster supports adding an incomplete replica set like linux software RAID does.
22:31 semiosis you can of course grow the underlying brick filesystems, without adding more bricks, to add capacity to your volume
22:31 Leolo semiosis - hadn't thought of that
22:32 Leolo does gluster automatically adjust to this?  does gluster even know the size of the underlying FS?
22:32 semiosis automagic
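For XFS bricks on LVM, growing in place is roughly (device and path names are illustrative):

    lvextend -L +500G /dev/vg0/brick1
    xfs_growfs /bricks/b1          # XFS grows online, addressed by mount point
    df -h /bricks/b1               # the volume's free space reflects the change automatically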
22:32 tdasilva joined #gluster
22:35 semiosis you can also have many bricks per server, so if you do need to expand to more servers (to handle more load) you can move the bricks off
22:35 Leolo the scenario I'm looking at is a "cloud" version of my software.  It's a document warehouse.  We'd have a "bunch" of commodity servers in a gluster with replicate 3.  If the gluster fills up, just add 3 more servers
22:35 semiosis imho add-brick (and the required rebalance) is too expensive, and should be avoided if possible by capacity planning
22:36 semiosis Leolo: imho, you add servers to add performance, not capacity
22:36 semiosis if your load remains constant, then just add more block storage to the existing servers
22:36 Leolo well, if they are 1U servers, you don't have a lot of space for drives :-)
22:36 Leolo semiosis : hmmm... interesting point
22:38 Leolo I'd like to avoid having partitions.  we hypothetically go from 10 customers to 100 customers.  an ideal solution would allow us to just add servers as needed
22:38 semiosis yeah ideal, sure
22:38 semiosis but here in the real world :)
22:38 elyograg Dell R720xd ... twelve hotplug 3.5 inch drive bays in 2U.  Plus a pair of hotplug 2.5 inch drives in the back for the OS.
22:38 _dist Leolo: you could add new volumes instead worst case?
22:38 _dist storage is so cheap though, why not just go overboard to begin with
22:38 Leolo new volumes means "partitions"  as in you could have wasted space
22:39 semiosis Leolo: try add-brick & rebalance, see how it works for you.  people often report a rebalance on their prod volume takes *days* to complete
22:39 _dist Leolo: I mean if you get into a space crunch you could add a new gluster volume later, to avoid wasting space. Rather than trying to increase an existing one
22:40 Leolo I wrote the software coming from an ecommerce background.  I was thinking "couple 100k new documents a year".  In the Real World I have 2 clients that have >1million new documents a year.
22:40 Leolo semiosis : and during the rebalance, everything slows to a crawl.  Gotcha
22:40 _dist when we added a replicate brick to 1xDistribute to turn it in 2xReplicate for 200GB of data it took about 12 hours
22:40 semiosis ^^ not rebalance!
22:40 Leolo currently the software is installed on customer's hardware.  We are thinking of a cloud
22:41 _dist semiosis: fair enough :)
22:41 gdubreui joined #gluster
22:41 Leolo I was initialy thinking of doing replication at the application layer.  But gluster looks like I can avoid that.
22:42 Leolo does removing a server from a gluster equal a rebalance?
22:42 semiosis not sure
22:43 Leolo aha, you'd do a replace-brick
22:44 elyograg Leolo: removing an entire replica set is a rebalance.  I think you can remove a brick without any problems if you have at least one remaining replica, but as you just said, replace-brick is better.
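A hedged example of the 3.4-era replace-brick sequence for swapping server3's brick onto server7 (names illustrative); on replicated volumes many people instead use "commit force" and let self-heal repopulate the new brick:

    gluster volume replace-brick docs server3:/bricks/b1 server7:/bricks/b1 start
    gluster volume replace-brick docs server3:/bricks/b1 server7:/bricks/b1 status
    gluster volume replace-brick docs server3:/bricks/b1 server7:/bricks/b1 commit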
22:44 Leolo R720xd is 2k
22:45 Leolo you can get 1U dell refurbs for 75$
22:45 Leolo wait, 12 x 3.5 in 2U?  is that physically possible?
22:45 elyograg not the way I specced it.  I think it's $3K for each server, plus twelve aftermarket 4TB enterprise-class hard drives, over $300 each.
22:45 Leolo :-)
22:46 elyograg 24, not 12.
22:46 Leolo you trust 4TB drives?
22:46 elyograg they are western digital raid edition - enterprise class.
22:46 Leolo fair enough :-)
22:46 _dist Leolo: I prefer self builds with supermicro, I like to stay away from dell/lenovo/hp cause they do stupid stuff like make you buy HDs just to get the hot-swap trays
22:47 elyograg _dist: we buy those on ebay. :)
22:47 elyograg we buy it with one SAS drive to make sure we get the right kind of controller.
22:47 Leolo I've had bad experiences with supermicro - hotswap trays that won't latch
22:47 _dist elyograg: honestly I'm using 3TB reds, in a raid array where we'd need 4 drive failures for data loss. I haven't tried the REs yet, they aren't too much more
22:48 Leolo my understanding is that if you are doing RAID, you should be getting reds, not REs
22:48 _dist Leolo: Lenovo for example, ships their chassis without trays (annoying) and charges $150 MIN (because they force you to buy a drive). Supermicro sells 3.5 drive trays for $5/each.
22:48 Leolo ely - that's ~6k per server :-)
22:49 semiosis Leolo: if you're in the software business why run your own hardware?
22:49 elyograg the ebay drive tray knockoffs have been a bit of a crapshoot, but no major issues.
22:49 semiosis Leolo: regulations?
22:49 elyograg WD red is a 5400 RPM drive.  holy cow, slow.
22:49 Leolo semiosis - that's a good question.  Short answer is that I'm also a sysadmin.  Or maybe simply NIH.
22:49 semiosis Leolo: hah, ok. well fwiw gluster works pretty well in EC2
22:50 Leolo I write the software.  I also sysadmin the systems it is installed on
22:50 Leolo EC2 is in the USA.  I really want to keep my data in Canada.
22:50 semiosis there must be someone doing IaaS in canada
22:51 Leolo I'm currently at planning stages.  We'd have to do the math to see if colo + server costs would be less then renting
22:51 Leolo semiosis : you'd be surprised.
22:52 elyograg Leolo: actually more like $8K per server.  We could get MUCH cheaper drives, but they'd either be a lot slower, or I'd never trust them in an enterprise setting.
22:52 semiosis purpleidea: canadian IaaS? go!
22:53 Leolo though I'm not fully sure what you mean by IaaS.  Yes, I can rent virtual servers.  I can also hire providers to set up VM solutions.  But I don't think you can get something as flexible as EC2
22:53 semiosis infrastructure as a service.  sorry for the buzzword
22:54 Leolo 12*300+3000 + 2x200 = 7000 ( 300 = RE, 200 = SSD)
22:54 Leolo I know what IAAS is.  but like all buzzwords, it can mean different things to different people
22:54 elyograg I'm just using a 300GB SAS drive for the two in the back.  And we get those from Dell.
22:55 _dist elyograg: iirc reds are 7200, only reason I'd watch on using them for "hardware raid" would be if someone didn't config their failure time down to the controllers specs, also they don't have NCQ
22:55 Leolo which is what I always tell my associate when we talk about the cloud "to the customer, cloud just means 'always avaiable, someone else deals with the problems'"
22:55 elyograg a google search for 'western digital red' shows results that say 5400 RPM.
22:55 Leolo reds don't have NCQ?  what is this tomfoolery!
22:55 semiosis Leolo: i mean, make an API request to launch a VM, make another API request to attach X TB of block storage to it
22:56 _dist Leolo: sorry I meant the opposite haha
22:56 Leolo semiosis : OK.  I don't think that exists in Canada.
22:56 semiosis bummer :(
22:57 * semiosis afk
22:58 Leolo http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-771442.pdf Rotational speed (RPM) IntelliPower
22:58 elyograg the actual specification says the rotation speed is 'intellipower' ... which tom's hardware seems to think is about 5400. :)
22:58 Leolo whatever that means
22:59 elyograg http://www.tomshardware.com/forum/262538-32-what-intellipower
22:59 glusterbot Title: What is the "Intellipower"? - Hard Drives - Storage (at www.tomshardware.com)
22:59 _dist yeah I noticed that on their spec sheet http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-771442.pdf
23:00 _dist based on the speeds I get out of them I just incorrectly assumed, looks like it's dynamic, probably so they don't run at 80C like blacks
23:00 Leolo WD pisses me off with their marketing speak.  I need to know exactly what they mean by "optimized for RAID"
23:01 _dist Leolo: only thing I can imagine is that they let you config (without special firmware) the drop out point (report back that they didn't find shit). But I mean reds are budget drives
23:02 Leolo there's that, yes.  Reds will tell the controller "I HAVE AN ERROR" sooner so the RAID can handle it, rather than try and try again.
23:02 _dist well, you can config that and tweak to as any ms as you like
23:02 _dist depending on the controller you might want that
23:03 elyograg so basically it's a desktop-class drive with enterprise-friendly firmware.  :)
23:03 _dist yeap, exactly. Cheap, somewhat relaible, which is ok as long as you know that when you buy them
23:05 _dist I've honestly only seen 1-2 fail in the past few years using them. Usually they don't hard fail, but they start to show up as high util in an iostat as a culprit for an array being slow
23:05 Leolo http://www.silentpcreview.com/article1285-page4.html - "Red's lower rotational speed which we confirmed as ~5,400 RPM via acoustic testing."
23:05 glusterbot Title: Western Digital Red 3TB & 1TB Hard Drives | silentpcreview.com (at www.silentpcreview.com)
23:07 _dist Leolo: cool, that doesn't have to mean they perform more slowly though. WD probably knew everyone would make assumptions without doing research if they just posted the 5,400
23:11 tdasilva left #gluster
23:12 velladecin joined #gluster
23:12 sputnik13 joined #gluster
23:14 velladecin From RedHat/Gluster docs I got that local FW (iptables) should be completely open for the servers in the Gluster cluster :) However, is it really so? Would it be sufficient to open just high ports? 1024 - 65535
23:14 Leolo dist - pcreview puts the 3TB raids as faster the blacks.  that review is 2years old though
23:14 Leolo velladecin - that would be the exact same thing.  ports >1024 are as trustworthy as those <1024
23:15 JoeJulian @ports
23:15 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
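Turned into iptables rules for a 3.4 server, restricted to the storage subnet (10.0.0.0/24 is illustrative), rather than opening everything above 1024:

    -A INPUT -s 10.0.0.0/24 -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (+rdma)
    -A INPUT -s 10.0.0.0/24 -p tcp --dport 49152:49200 -j ACCEPT   # bricks: one port each, counting up from 49152
    -A INPUT -s 10.0.0.0/24 -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    -A INPUT -s 10.0.0.0/24 -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    -A INPUT -s 10.0.0.0/24 -p udp --dport 111 -j ACCEPT
    -A INPUT -s 10.0.0.0/24 -p tcp --dport 2049 -j ACCEPT          # NFS (3.4+)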
23:19 ninkotech__ joined #gluster
23:27 neurodrone__ joined #gluster
23:33 andreask joined #gluster
23:38 morse joined #gluster
23:48 morse joined #gluster
23:49 kkeithley1 joined #gluster
23:51 Leolo people still around?
23:52 Leolo I had a question about coordinating MySQL replication with gluster replication
23:52 Leolo but I guess my answer is "write() doesn't return until all replication has happened" which means MySQL might lag behind gluster, but never the opposite
23:55 Leolo other question - docs talk about bonding interfaces.  My understanding is that under Linux, bonding doesn't increase through-put.
