IRC log for #gluster, 2014-07-07

All times shown according to UTC.

Time Nick Message
00:48 om joined #gluster
01:03 sputnik13 joined #gluster
01:11 plarsen joined #gluster
01:13 sputnik13 joined #gluster
01:17 harish_ joined #gluster
01:33 bala joined #gluster
01:57 harish_ joined #gluster
02:06 DV joined #gluster
03:05 hagarth joined #gluster
03:09 gildub joined #gluster
03:20 Guest88683 joined #gluster
03:46 nbalachandran joined #gluster
03:58 spandit joined #gluster
03:59 kanagaraj joined #gluster
03:59 itisravi joined #gluster
04:07 kdhananjay joined #gluster
04:08 ppai joined #gluster
04:31 ndarshan joined #gluster
04:39 sputnik13 joined #gluster
04:42 RameshN joined #gluster
04:42 shubhendu joined #gluster
04:44 sputnik13 joined #gluster
04:49 nishanth joined #gluster
04:49 pureflex joined #gluster
04:50 sputnik13 joined #gluster
04:55 shubhendu joined #gluster
05:04 sputnik13 joined #gluster
05:06 rjoseph joined #gluster
05:10 jv115 joined #gluster
05:18 jv115 Hi all, just wondering if there is someone here who can help me with an issue I'm having with a brick going offline in 3.5.0
05:20 sputnik13 joined #gluster
05:21 jv115 joined #gluster
05:26 bala1 joined #gluster
05:26 psharma joined #gluster
05:28 vimal joined #gluster
05:32 ramteid joined #gluster
05:32 hagarth joined #gluster
05:32 davinder16 joined #gluster
05:33 rastar joined #gluster
05:33 prasanthp joined #gluster
05:35 lalatenduM joined #gluster
05:37 obelix_ joined #gluster
05:48 rjoseph joined #gluster
05:49 harish_ joined #gluster
05:51 lanning joined #gluster
05:57 vpshastry joined #gluster
06:02 meghanam joined #gluster
06:03 sputnik13 joined #gluster
06:05 silky joined #gluster
06:05 kshlm joined #gluster
06:05 shylesh__ joined #gluster
06:13 rgustafs joined #gluster
06:31 raghu joined #gluster
06:34 Philambdo joined #gluster
06:36 saurabh joined #gluster
06:40 ekuric joined #gluster
06:47 rjoseph joined #gluster
06:50 hagarth joined #gluster
07:00 mbukatov joined #gluster
07:03 atinmu joined #gluster
07:05 haomaiwang joined #gluster
07:08 keytab joined #gluster
07:14 the-me joined #gluster
07:29 eseyman joined #gluster
07:33 ctria joined #gluster
07:45 andreask joined #gluster
07:51 fsimonce joined #gluster
08:00 liquidat joined #gluster
08:18 Pupeno joined #gluster
08:26 ricky-ti1 joined #gluster
08:40 Slashman joined #gluster
08:41 n0de_ joined #gluster
09:03 ndarshan joined #gluster
09:03 aravindavk joined #gluster
09:03 davinder17 joined #gluster
09:08 sputnik13 joined #gluster
09:58 Norky joined #gluster
10:08 bala1 joined #gluster
10:09 rose32 joined #gluster
10:17 rose32 can we consider glusterFS like a "network LVM"?
10:20 ndevos rose32: no, a "network LVM" sounds more like drbd, or possibly ceph with its kernel module, GlusterFS is a scalable filesystem, more like nfs
10:20 aravindavk joined #gluster
10:21 ndevos rose32: if you compare LVM to Gluster, the Physical Volumes match a 'brick', and there is no such thing as a Volume Group in Gluster, and the Logical Volume would be a Gluster Volume
10:22 ndevos rose32: the biggest difference would be that LVM offers block-device access, Gluster a file/posix interface
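A minimal sketch of the analogy ndevos draws, assuming two hypothetical servers (server1, server2) and a brick path /data/brick1; bricks (roughly the PVs) are aggregated into a Gluster volume (roughly the LV), and clients get file access rather than a block device:

    # on server1: add the second server to the pool, then build a volume from two bricks
    gluster peer probe server2
    gluster volume create demo server1:/data/brick1 server2:/data/brick1
    gluster volume start demo
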
10:22 ndevos @vocabulary
10:23 hagarth joined #gluster
10:23 rose32 ndevos: well... I'm looking for an alternative that lets me scale and resize the filesystem by just adding a new node to the network, and at the same time provides resilience like RAID
10:23 rwheeler joined #gluster
10:23 rose32 ndevos: I'm new to this kind of filesystem and I don't know which alternative is the best
10:23 ndevos rose32: right, sounds like Gluster would work for you then
10:23 rose32 ndevos: I have discarded hdfs, since it is not posix
10:24 rose32 ndevos: and it seems that ceph is not recommended for production
10:24 rose32 ndevos: so, currently I'm considering gluster
10:25 ndevos rose32: ceph is more targeted at block-device workloads, gluster at filesystem access
10:25 rose32 ndevos: the idea is to provide access to such filesystem to other systems via samba, nfs, and so on
10:26 rose32 ndevos: so, I think that gluster could be a better solution
10:26 ndevos rose32: yes, gluster should surely work for that use-case
10:26 rose32 ndevos: but I'm not sure yet if I'm wrong or not
10:27 rose32 ndevos: moreover, I have seen that the installation of gluster is pretty easy compared with other alternatives like hdfs
10:28 ndevos rose32: yes, gluster is relatively simple to setup, I was amazed the first time I saw it :)
10:30 rose32 ndevos: I guess that if you expand a glusterFS and it is replicated, then the replicas are rebalanced when you add a new node, right?
10:31 ndevos rose32: you can do that, but it is not required; many users do not rebalance after adding a new storage server (it requires some network bandwidth), and the overhead of an unbalanced volume is very low
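If you do decide to rebalance after adding bricks, a sketch with a hypothetical volume name "myvol":

    gluster volume rebalance myvol start
    gluster volume rebalance myvol status   # watch progress; this is what consumes the network bandwidth mentioned above
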
10:32 rose32 ndevos: mmm it sounds good :)
10:33 rose32 rose32: and which kind of replication does it use? I mean... is it like a mirror raid? or does it use the distributed space in a more "clever" strategy?
10:33 glusterbot New news from newglusterbugs: [Bug 1116797] [RHS-LOGGING]: Some of the log messages are not printing the log-level in the logs <https://bugzilla.redhat.com/show_bug.cgi?id=1116797> || [Bug 1116782] Please add runtime option to show compile time configuration <https://bugzilla.redhat.com/show_bug.cgi?id=1116782>
10:33 rose32 ndevos: and which kind of replication does it use? I mean... is it like a mirror raid? or does it use the distributed space in a more "clever" strategy?
10:34 rose32 oops sorry
10:34 ndevos rose32: when you use replication, the files are stored on two different bricks
10:35 rose32 ndevos: can you configure the number of replicas?
10:35 ndevos rose32: with replication, you need to add bricks in pairs, each pair will have the same contents (contents are stored on the filesystem of the brick, as normal files)
10:36 ndevos rose32: yes, you can, 'replica 2' is most common and best tested, 'replica 3' is used by some with success too, but it gets many improvements in the upcoming glusterfs-3.6
10:38 rose32 ndevos: does this mean that if you want to add a new node in order to resize a glusterfs with replication (2 replicas), you must add two nodes each time?
10:38 ndevos rose32: 'replica 2' is like a raid-1 layer, the scalability is done with 'distribute' which is similar to a raid-0 layer where chunks/stripes are files
10:38 ndevos rose32: yes, 'replica 2' requires you to add 2 bricks each time
10:38 haomaiwang joined #gluster
10:39 ndevos rose32: a brick is a filesystem on a storage server, and storage servers can host many bricks - but you dont want a brick-pair to be located on the same server (defeats the redundancy on server outage)
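A sketch of the distribute+replicate layout ndevos describes, with hypothetical hostnames; bricks are listed so that each consecutive pair (a replica pair) sits on two different servers:

    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server3:/data/brick1 server4:/data/brick1
    # grow the volume later by adding bricks in pairs, again on different servers
    gluster volume add-brick myvol server5:/data/brick1 server6:/data/brick1
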
10:40 rose32 I see... and if you use the same server it has some impact on the performance (I guess)
10:41 nbalachandran joined #gluster
10:43 vpshastry joined #gluster
10:47 rose32 ndevos: and do you have some feedback about performance? I guess that the load of a replicated glusterfs is distributed among the different bricks, right?
10:52 edward1 joined #gluster
10:53 sputnik13 joined #gluster
10:57 haomaiw__ joined #gluster
11:04 burn420 joined #gluster
11:08 aravindavk joined #gluster
11:08 ndevos rose32: in general, when a file is read, the brick that responds first to the LOOKUP (like 'file exists') is the one contacted for the read too
11:08 ndevos rose32: the replication is done client-side, so for a 'replica 2' a write costs 2x the bandwidth compared to a read
11:11 rose32 ndevos: mmm I'm reading something about lustre and its architecture seems more complex than gluster
11:12 ndevos rose32: I never really looked at lustre
11:13 kiwnix joined #gluster
11:13 ppai joined #gluster
11:13 rose32 ndevos: have u used lustre as a way to store and use the VMs of a VMWare ESX environment? Or could it be used for such kind of purposes?
11:14 rose32 ndevos: I have just read a "WhatsNew 3.3" and it says something about granular locking and VMs, which seems like something interesting
11:14 ndevos rose32: lustre? no, never used it
11:14 rose32 ndevos: no, I mean gluster
11:14 rose32 sorry
11:15 ndevos rose32: yeah, I think gluster can be used over NFS with vmware esx, but I dont have any experience with that myself
11:15 rose32 ndevos: ok, thank you for all your information :)
11:15 rose32 ndevos: I think I'm going to try gluster :)
11:17 ndevos rose32: you're welcome, let us know here or on the ,,(mailinglists) if you have questions or a success to report
11:17 glusterbot rose32: http://www.gluster.org/interact/mailinglists
11:18 rose32 ndevos: no problem :)
11:20 hagarth joined #gluster
11:23 vpshastry joined #gluster
11:26 andreask joined #gluster
11:27 lkoranda joined #gluster
11:32 kkeithley joined #gluster
11:58 Amit_ joined #gluster
12:03 Slashman joined #gluster
12:04 ctria joined #gluster
12:10 coredump joined #gluster
12:11 aravindavk joined #gluster
12:19 sputnik13 joined #gluster
12:20 sputnik13 joined #gluster
12:21 LebedevRI joined #gluster
12:22 Bardack joined #gluster
12:25 itisravi_ joined #gluster
12:28 systemonkey2 joined #gluster
12:28 glusterbot` joined #gluster
12:29 ws2k3 joined #gluster
12:29 Georgyo joined #gluster
12:29 Ch3LL__ joined #gluster
12:30 Peanut_ joined #gluster
12:30 firemanxbr joined #gluster
12:30 Nopik joined #gluster
12:30 l0uis_ joined #gluster
12:30 mjrosenb_ joined #gluster
12:30 lezo_ joined #gluster
12:30 fyxim_ joined #gluster
12:31 glusterbot joined #gluster
12:31 troj joined #gluster
12:31 decimoe joined #gluster
12:31 Nopik joined #gluster
12:31 lezo_ joined #gluster
12:31 fyxim_ joined #gluster
12:31 atrius` joined #gluster
12:32 mwoodson joined #gluster
12:33 eightyeight joined #gluster
12:37 suliba_ joined #gluster
12:38 SNow joined #gluster
12:38 SNow joined #gluster
12:38 harish_ joined #gluster
12:38 B21956 joined #gluster
12:39 ninkotech__ joined #gluster
12:39 ninkotech_ joined #gluster
12:39 ninkotech joined #gluster
12:53 bennyturns joined #gluster
12:54 ctria joined #gluster
12:54 theron joined #gluster
12:54 ninkotech__ joined #gluster
12:54 theron joined #gluster
12:54 ninkotech joined #gluster
12:54 ninkotech_ joined #gluster
12:56 verdurin joined #gluster
12:58 glusterbot New news from resolvedglusterbugs: [Bug 1115600] Perf: OOM when running performance regression tests(iozone sequential writes) <https://bugzilla.redhat.com/show_bug.cgi?id=1115600>
12:58 sroy_ joined #gluster
13:00 julim joined #gluster
13:02 ktosiek_ joined #gluster
13:05 diegows joined #gluster
13:07 Norman_M Hey guys!
13:07 Norman_M We still have some serious problems
13:07 Norman_M The load on our bricks is very high possibly due to some self heal problems
13:08 Norman_M we have got a lot of error messages saying that self heal is not possible because a gfid is only on one of the two bricks
13:09 Norman_M i looked up some of these cases and there is no gfid-file on one of our two bricks
13:10 Norman_M is it possible to repair this by copying the gfid file from one brick to the other or will it make things worse?
13:12 hagarth joined #gluster
13:17 vpshastry joined #gluster
13:21 chirino joined #gluster
13:27 P0w3r3d joined #gluster
13:31 theron joined #gluster
13:44 kanagaraj joined #gluster
13:48 tdasilva joined #gluster
13:49 Norman_M No ideas?
13:49 n0de joined #gluster
13:54 ninkotech_ joined #gluster
14:04 vpshastry joined #gluster
14:16 nbalachandran joined #gluster
14:21 kkeithley joined #gluster
14:25 mortuar joined #gluster
14:33 ctria joined #gluster
14:35 ninkotech joined #gluster
14:35 ninkotech__ joined #gluster
14:36 mortuar_ joined #gluster
14:39 deepakcs joined #gluster
14:49 bennyturns joined #gluster
14:50 bennyturns joined #gluster
14:55 jbrooks joined #gluster
14:59 ghenry joined #gluster
14:59 ghenry joined #gluster
15:02 JoeJulian @seen semiosis
15:02 glusterbot JoeJulian: semiosis was last seen in #gluster 3 days, 17 hours, 2 minutes, and 34 seconds ago: <semiosis> but if you depend on speedy directory list then yeah that would be bad
15:04 gmcwhistler joined #gluster
15:06 japuzzo joined #gluster
15:20 theron joined #gluster
15:21 theron_ joined #gluster
15:33 nishanth joined #gluster
15:42 sjm joined #gluster
15:51 RameshN joined #gluster
15:54 SFLimey joined #gluster
15:57 semiosis :O
15:57 JoeJulian Good morning.
15:57 semiosis and to you
15:58 JoeJulian pranithk left for the night
15:58 semiosis ok, well i got the message
15:58 JoeJulian figured
15:58 semiosis it's an easy change.  I'll do it today, promise
15:59 JoeJulian :)
15:59 semiosis things have been crazy lately, trying to get a new apartment, and AC is broken in current place...
15:59 JoeJulian Ugh, that does not sound like a fun thing to do there at this time of year.
15:59 semiosis afk for a bit
16:02 haomaiwang joined #gluster
16:09 nbalachandran joined #gluster
16:10 hagarth semiosis++
16:10 hagarth gah, i can never get the karma right
16:15 Norman_M gfid files with zero size and a link count of 1 are dead gfid files, aren't they?
16:16 JoeJulian Yes
16:17 Norman_M ok so i can delete them?
16:17 Norman_M thx :)
16:17 kkeithley @semiosis++
16:17 glusterbot kkeithley: semiosis's karma is now 2000003
16:18 jag3773 joined #gluster
16:18 mjsmith2 joined #gluster
16:24 hagarth @kkeithley++
16:24 glusterbot hagarth: kkeithley's karma is now 1
16:26 kkeithley ;-)
16:28 Mo__ joined #gluster
16:29 Norman_M And here comes another: a gfid file which has a link count of 1 is dead even if it has a filesize, right?
16:30 Norman_M because it is only a hardlink
16:30 keichii joined #gluster
16:31 JoeJulian I would consider any gfid *file* with a link count of 1 as dead. Not true for symlinks.
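A hedged way to list such candidates on a brick (the brick path is hypothetical; inspect the output before deleting anything):

    # regular files under .glusterfs with only one hard link; symlinks are excluded by -type f
    find /data/brick1/.glusterfs -type f -links 1
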
16:31 purpleidea kkeithley++
16:31 JoeJulian criminy, people. Is it that hard? :P
16:32 purpleidea @kkeithley++
16:32 glusterbot purpleidea: kkeithley's karma is now 2
16:32 ricky-ti1 joined #gluster
16:32 purpleidea JoeJulian: woops
16:32 hagarth @karma
16:32 glusterbot hagarth: Highest karma: "jjulian" (4000000), "semiosis" (2000003), and "kkeithley" (2). Lowest karma: "kkeithley" (2), "semiosis" (2000003), and "jjulian" (4000000).
16:32 _Bryan_ joined #gluster
16:32 * kkeithley doesn't want his karma to go to his head
16:32 hagarth lol
16:33 semiosis @seen jjulian
16:33 glusterbot semiosis: I have not seen jjulian.
16:33 purpleidea JoeJulian: criminy, people. glusterbot needs a smarter parser, what if we replace you with a JoeJulian command @JoeJulian... then it all breaks
16:33 purpleidea @kkeithley+=10
16:33 purpleidea @kkeithley++
16:33 glusterbot purpleidea: kkeithley's karma is now 3
16:34 Norman_M @JoeJulian++
16:34 glusterbot Norman_M: JoeJulian's karma is now 1
16:34 dino82 karma train choo choo
16:34 hagarth purpleidea: do you mean something like this?
16:34 spamtest joined #gluster
16:35 hybrid512 joined #gluster
16:35 spamtest @purpleidea++
16:35 glusterbot spamtest: purpleidea's karma is now 1
16:35 karma_ darn nick karma is already in use
16:35 purpleidea ^^ karma can be faked!!
16:35 hagarth @purpleidea++
16:35 spamtest @kkeithley++
16:35 glusterbot hagarth: purpleidea's karma is now 2
16:35 glusterbot spamtest: kkeithley's karma is now 4
16:35 spamtest @kkeithley++
16:35 glusterbot spamtest: kkeithley's karma is now 5
16:35 spamtest @kkeithley++
16:35 spamtest @kkeithley++
16:35 spamtest @kkeithley++
16:35 glusterbot spamtest: kkeithley's karma is now 6
16:35 spamtest @kkeithley++
16:35 spamtest @kkeithley++
16:35 glusterbot spamtest: kkeithley's karma is now 7
16:35 hagarth @help karma
16:35 spamtest @kkeithley++
16:36 glusterbot spamtest: kkeithley's karma is now 8
16:36 glusterbot spamtest: kkeithley's karma is now 9
16:36 spamtest @kkeithley++
16:36 spamtest @kkeithley++
16:36 kkeithley wtf?
16:36 glusterbot spamtest: kkeithley's karma is now 10
16:36 spamtest @kkeithley++
16:36 glusterbot hagarth: (karma [<channel>] [<thing> ...]) -- Returns the karma of <thing>. If <thing> is not given, returns the top N karmas, where N is determined by the config variable supybot.plugins.Karma.rankingDisplay. If one <thing> is given, returns the details of its karma; if more than one <thing> is given, returns the total karma of each of the things. <channel> is only necessary if the message
16:36 glusterbot hagarth: isn't sent on the channel itself.
16:36 glusterbot spamtest: kkeithley's karma is now 11
16:36 glusterbot spamtest: kkeithley's karma is now 12
16:36 glusterbot spamtest: kkeithley's karma is now 13
16:36 glusterbot spamtest: kkeithley's karma is now 14
16:36 purpleidea ^ how long until it goes to his head?
16:36 purpleidea @karma
16:36 glusterbot purpleidea: Highest karma: "jjulian" (4000000), "semiosis" (2000003), and "kkeithley" (14). Lowest karma: "JoeJulian" (1), "purpleidea" (2), and "kkeithley" (14). You (purpleidea) are ranked 4 out of 5.
16:36 purpleidea @hagarth++
16:36 glusterbot purpleidea: hagarth's karma is now 1
16:36 purpleidea @hagarth++
16:36 purpleidea @hagarth++
16:36 glusterbot purpleidea: hagarth's karma is now 2
16:36 glusterbot purpleidea: hagarth's karma is now 3
16:37 LebedevRI joined #gluster
16:37 hagarth @channelstats
16:37 glusterbot hagarth: On #gluster there have been 327781 messages, containing 12931750 characters, 2136641 words, 7897 smileys, and 1070 frowns; 1633 of those messages were ACTIONs. There have been 143573 joins, 3986 parts, 139942 quits, 24 kicks, 568 mode changes, and 7 topic changes. There are currently 222 users and the channel has peaked at 250 users.
16:38 purpleidea #gluster... our smile to frown ratio is about 7.38 #action is this sufficient for the gluster community?
16:39 hagarth purpleidea: we need more I think
16:40 purpleidea hagarth: agreed. we should compare with the happiness ratio in #ceph too... ;)
16:42 JoeJulian @mp add "(\S+)\+\+" "[$1++]"
16:42 glusterbot JoeJulian: The operation succeeded.
16:42 JoeJulian @mp add "(\S+)\-\-" "[$1--]"
16:43 glusterbot JoeJulian: The operation succeeded.
16:43 purpleidea JoeJulian++
16:43 JoeJulian hmm, worked in test...
16:44 purpleidea (JoeJulian)++
16:44 purpleidea (JoeJulian)\+\+
16:44 purpleidea JoeJulian\+\+
16:44 purpleidea nope
16:44 purpleidea lower case s?
16:45 JoeJulian that would be whitespace.
16:45 purpleidea idk then, sry
16:45 JoeJulian Ah, I see...
16:46 JoeJulian @mp add "(\S+)\+\+" "$1++"
16:46 glusterbot JoeJulian: The operation succeeded.
16:46 purpleidea JoeJulian++
16:46 JoeJulian @mp add "(\S+)\-\-" "$1--"
16:46 glusterbot purpleidea: JoeJulian's karma is now 4
16:46 glusterbot JoeJulian: The operation succeeded.
16:46 hagarth jjulian++
16:46 glusterbot hagarth: jjulian's karma is now 4000001
16:47 purpleidea @mp add "(\S+)\:\ \+\+" "$1++"
16:47 glusterbot purpleidea: Error: You don't have the admin; channel,op capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
16:47 purpleidea i think that ^ would be good too
16:47 purpleidea so you can:
16:47 purpleidea JoeJulian: ++
16:47 purpleidea (tab it out)
16:47 JoeJulian @mp add "(\S+): \+\+" "$1++"
16:47 glusterbot JoeJulian: The operation succeeded.
16:47 purpleidea JoeJulian: ++
16:47 glusterbot purpleidea: JoeJulian's karma is now 5
16:47 purpleidea --
16:47 JoeJulian @mp add "(\S+): \-\-" "$1--"
16:47 glusterbot JoeJulian: The operation succeeded.
16:48 purpleidea sweet
16:48 JoeJulian That's actually kind of funny that I messed up my own nick when I cheated.
16:48 dtrainor joined #gluster
16:48 purpleidea hagarth: now that #gluster is perfect, all that's left is GlusterFS patches :)
16:57 calum_ joined #gluster
16:57 MacWinner joined #gluster
17:00 theron joined #gluster
17:01 Peter3 joined #gluster
17:01 zerick joined #gluster
17:06 sputnik13 joined #gluster
17:07 cfeller joined #gluster
17:14 jobewan joined #gluster
17:30 _Bryan_ joined #gluster
17:36 prasanth joined #gluster
17:38 vimal joined #gluster
17:44 hchiramm_ joined #gluster
17:58 edwardm61 joined #gluster
18:06 sroy_ joined #gluster
18:08 sputnik13 joined #gluster
18:09 sputnik13 joined #gluster
18:32 plarsen joined #gluster
18:49 ramteid joined #gluster
19:02 calum_ joined #gluster
19:10 theron_ joined #gluster
19:19 sroy_ joined #gluster
19:21 theron joined #gluster
19:31 ricky-ticky1 joined #gluster
19:34 dumontster joined #gluster
19:37 dumontster hey all, using gluster 3.4.4 with 1 volume, replica 3 and getting this error on a read from the volume: ls: cannot access /gluster: Transport endpoint is not connected
19:38 dumontster looking into the client logs i see this many times: /usr/lib/x86_64-linux-gnu/glusterfs/3.4.4/xlator/performance/quick-read.so(qr_readv+0x62) [0x7f49b7dfc2e2]))) 0-iobuf: invalid argument: iobuf
19:38 dumontster along with: /usr/lib/x86_64-linux-gnu/glusterfs/3.4.4/xlator/performance/quick-read.so(qr_readv+0x62) [0x7f49b7dfc2e2]))) 0-iobuf: invalid argument: iobref
19:39 dumontster i found some results for gluster and rdma but im struggling to understand it
19:45 semiosis dumontster: most common causes of that are 1, volume not started, 2, iptables blocking ports, or 3, hostnames not resolving correctly
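Quick checks for the three causes semiosis lists, with hypothetical server and volume names:

    gluster volume status myvol            # 1. is the volume started and are all bricks online?
    telnet gluster1.example.com 24007      # 2. are the management/brick ports reachable from the client?
    getent hosts gluster1.example.com      # 3. do the brick hostnames from "gluster volume info" resolve on the client?
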
19:45 semiosis please ,,(pasteinfo)
19:45 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:46 ekuric joined #gluster
19:49 dumontster here is the output of gluster volume info: http://ur1.ca/hoxpq
19:49 glusterbot Title: #116133 Fedora Project Pastebin (at ur1.ca)
19:49 semiosis so we can rule out 1 & 3
19:49 dumontster thanks semiosis
19:49 semiosis please verify that your servers (all of them) allow the needed ,,(ports)
19:50 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
19:50 semiosis which should just be 24007 & 49152
19:50 semiosis if this is the first volume
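A sketch of matching iptables rules for a first volume on 3.4, assuming a single brick per server (open more of the 49152+ range as you add bricks):

    iptables -A INPUT -p tcp --dport 24007 -j ACCEPT   # glusterd management
    iptables -A INPUT -p tcp --dport 49152 -j ACCEPT   # first brick (glusterfsd)
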
19:50 semiosis is this in EC2?
19:50 dumontster yes ec2
19:51 semiosis you really shouldn't use local-ipv4 then, those are *going* to change!
19:51 dumontster i am using an Elastic IP bound to one of the nodes in the cluster
19:51 semiosis i recommend making dedicated hostnames for your gluster servers (gluster1.domain.whatever for ex) and CNAMEing those to the public-hostname of your instances
19:52 dumontster i am using the public hostname of the elastic ip
19:52 semiosis the glusterfs client first connects to the server given in the mount command to fetch the volume info, then uses the brick addresses (as shown in volume info) to connect directly to all the bricks
19:54 dumontster so what is the recommended address/dns for ec2 - im not using public addresses, only private
19:54 semiosis that's fine
19:55 semiosis as i said above, make DNS records for gluster, and CNAME those to the public-hostname of the instances
19:55 semiosis then use those when creating the volume
19:55 semiosis public-hostname resolves to the local-ipv4 from within ec2, and the public-ipv4 from outside
19:55 dumontster i see, because the private ip's can change on reboot
19:55 semiosis or if you need to replace a server
19:56 semiosis they would change on a cold boot, not a warm reboot
19:56 dumontster copy that
19:56 semiosis then you just update the CNAME
19:56 semiosis and everything keeps working
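A sketch of the naming scheme semiosis suggests, with hypothetical names; each dedicated record is a CNAME to the instance's EC2 public-hostname, and the host named in the mount command is only used to fetch the volume layout before the client talks to every brick directly:

    ; DNS zone entries (the amazonaws targets are placeholders)
    gluster1.example.com.  IN  CNAME  ec2-203-0-113-10.compute-1.amazonaws.com.
    gluster2.example.com.  IN  CNAME  ec2-203-0-113-11.compute-1.amazonaws.com.

    # client mount using the dedicated name
    mount -t glusterfs gluster1.example.com:/myvol /mnt/gluster
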
19:58 gehaxelt Does anybody know cheap storage servers (500gb-1tb)?
19:59 uebera|| joined #gluster
20:01 dumontster semiosis: my sec groups have all tcp allowed
20:07 glusterbot New news from newglusterbugs: [Bug 1117010] Geo-rep: No cleanup for "CHANGELOG" files at bricks from master and slave volumes. <https://bugzilla.redhat.com/show_bug.cgi?id=1117010> || [Bug 1117018] Geo-rep: No cleanup for files "XSYNC-CHANGELOG" at working_dir for master volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1117018>
20:10 semiosis dumontster: what instance types are you using for servers & clients?
20:11 dumontster gluster servers are: c3.large; clients are c3.xlarge
20:13 semiosis ok, bear with me.  i need a few mins to wrap some stuff up then we're going to do some tests.  do you have 30-45 min to work on this?
20:14 dumontster yeah
20:14 dumontster i found this in the gluster docs which refers to our dns/ip comments: "After peer probe, in the remote machine, the peer machine information is stored with IP address instead of hostname."
20:15 ctria joined #gluster
20:18 semiosis dumontster: ,,(hostnames)
20:18 glusterbot dumontster: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
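A sketch of that re-probe, assuming the pool currently knows the peers by IP and the hypothetical names gluster1/gluster2 now exist in DNS:

    # from any other peer, probe the IP-known peer by its new name
    gluster peer probe gluster1.example.com
    # from gluster1, probe one of the others by name as well
    gluster peer probe gluster2.example.com
    gluster peer status    # peers should now be listed by hostname
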
20:19 semiosis ok first test, from the client machine, can you telnet to port 24007 on each server?
20:20 semiosis can you telnet to port 49152 on each server?
20:25 semiosis dumontster: also, what distro are you using?
20:26 dumontster semiosis: ubuntu 12.04
20:27 semiosis how about those telnet tests?
20:28 dumontster in progress
20:29 dumontster telnet to 24007 success to all 3 G servers
20:29 dumontster semiosis: http://ur1.ca/hoy7b
20:29 glusterbot Title: #116144 Fedora Project Pastebin (at ur1.ca)
20:30 semiosis so far so good.  how about 49152?
20:31 dumontster semiosis: http://ur1.ca/hoy8c
20:31 glusterbot Title: #116145 Fedora Project Pastebin (at ur1.ca)
20:31 dumontster 49152 all good
20:32 semiosis ok next up, privileged ports
20:33 semiosis could you install the package netcat-traditional on one of your servers?
20:33 dumontster sure
20:34 dumontster done
20:35 semiosis ok run this command on the server: sudo nc.traditional -c date -l -p 988
20:35 semiosis this will listen on port 988 & print the date to the first connection
20:35 semiosis then on the client machine, telnet to port 988 of that server
20:35 dumontster ok running
20:35 semiosis you should see the date
20:35 dumontster i dont
20:35 dumontster i ran this command on a G server, correct?
20:35 semiosis hmmm
20:36 dumontster hanging....
20:36 semiosis does your security group allow port 988?
20:36 semiosis iptables?
20:36 dumontster no iptables, ufw is disabled, sec group is 1-65### allowed
20:38 semiosis odd
20:38 semiosis that should work
20:39 semiosis trying to pinpoint if we're hitting a bug in ec2 where connections from priv ports to priv ports dont work on *3.* instances
20:40 semiosis https://forums.aws.amazon.com/thread.jspa?threadID=149933&tstart=0
20:40 semiosis my bug report kindly ignored by amazon
20:40 dumontster interesting
20:42 semiosis you should be able to telnet to port 988, since telnet uses a high numbered source port
20:42 dumontster an answer to that question would be really helpful
20:42 semiosis but if we're on an instance affected by this issue, then using a netcat client from a low source port should fail
20:42 semiosis the telnet really should work
20:42 semiosis puzzled why it's not
20:43 dumontster telnet response to 988: telnet: Unable to connect to remote host: Connection refused
20:43 semiosis is your netcat.traditional command still running?
20:43 semiosis are you connecting to the right IP?
20:43 semiosis refused means the port is reachable but nothing is listening on that port
20:44 dumontster this is the same client i telnet 27### and 49### from
20:45 semiosis ok
20:45 dumontster telnet> open 10.145.54.201 49152 Trying 10.145.54.201... Connected to 10.145.54.201.
20:45 dumontster telnet 10.145.54.201 988 Trying 10.145.54.201... telnet: Unable to connect to remote host: Connection refused
20:46 semiosis you need to run the netcat daemon command on 10.145.54.201, sudo nc.traditional -c date -l -p 988
20:46 dumontster so leave that running while i try to telnet?
20:46 semiosis yes
20:47 dumontster got it
20:47 dumontster telnet 10.145.54.201 988 Trying 10.145.54.201... Connected to 10.145.54.201. Escape character is '^]'. Mon Jul  7 20:47:24 UTC 2014
20:47 semiosis ok now re-run that netcat command (it quit after the connection)
20:47 dumontster ok
20:48 semiosis and on the client machine, instead of using telnet, use this command: sudo nc -p 987 10.145.54.201 988
20:48 semiosis do you see the date?
20:48 semiosis or does it hang?
20:48 dumontster returned date
20:48 dumontster sudo nc -p 987 10.145.54.201 988 Mon Jul  7 20:48:56 UTC 2014
20:48 semiosis hrm ok then looks like not affected by this bug
20:49 semiosis is this client the same one you got the transport endpoint not connected error?
20:50 dumontster yes
20:53 semiosis should've done this earlier, but could you please truncate the client log, try mounting the volume, and then pastie.org that client log?
20:54 Ch3LL_ joined #gluster
20:55 dumontster copy that
20:59 Ch3LL_ Is it possible to expand a replicated volume? From what I can see from a google search it is not possible. Just want to verify
21:00 JoeJulian Ch3LL_: expand? You can add bricks, you just need to add them in multiples of your replica count.
21:02 Ch3LL_ ya so I have 2 servers replicating and I want to change that to 4 servers. So i am assuming just 'volume add-brick replica 4 server3:/brick0 server4:/brick0' would do the job?
21:02 Ch3LL_ i initially tried adding a third server and it didn't replicate so maybe i just need to add a fourth
21:05 JoeJulian no
21:06 JoeJulian you should "volume add-brick server3:/brick0 server4:/brick0"
21:08 semiosis hang on, are you trying to change to 4-way replication?  or are you trying to add space but keep it 2-way?
21:08 semiosis just to be clear
21:08 Ch3LL_ i am trying to do a 4-way replication for HA
21:08 JoeJulian If you do replica 4, you're (most likely) wasting hardware.
21:08 gehaxelt Short question: I set up a distributed volume. Is it possible to change it into a replicated volume later on?
21:08 semiosis i agree with JoeJulian
21:08 JoeJulian 4-way replication is (typically) extremely excessive HA. Do you really need milliseconds of downtime annually?
21:09 semiosis or more likely, want 4 web servers each with a local copy of the data, using glusterfs to keep in sync
21:09 JoeJulian gehaxelt: yes, add as many bricks as is necessary to create your replicas and specify "replica N" in the add-brick command.
21:10 gehaxelt JoeJulian, thanks :)
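A sketch of what JoeJulian describes for gehaxelt's case, assuming a hypothetical 2-brick distribute volume "myvol" being converted to replica 2:

    # add one new brick per existing brick and raise the replica count in the same command
    gluster volume add-brick myvol replica 2 server3:/data/brick1 server4:/data/brick1
    # then trigger self-heal to copy existing data onto the new replicas
    gluster volume heal myvol full
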
21:10 dencaval joined #gluster
21:11 Ch3LL_ true. It probably is excessive. It is storage for our production apache servers. We currently are just using nfs with no redundancy. We have two sites so figured it would be a good plan to have two machines at each site. Maybe i need to replan my HA plan.
21:11 Ch3LL_ by two sites i mean two locations
21:12 dumontster semiosis: http://ur1.ca/hoywl
21:12 glusterbot Title: #116155 Fedora Project Pastebin (at ur1.ca)
21:12 semiosis Ch3LL_: where do writes come from?
21:13 semiosis dumontster: there's no problem in this log.  that client should be working fine.  if not, make some errors & re-pastie please
21:14 gehaxelt JoeJulian, Do you know something about the "encryption" feature? The wiki/docu states that it's not fully implemented yet?
21:15 dumontster semiosis: right because i trunc'd the log
21:15 JoeJulian I haven't used it. My understanding is over-the-wire is working, but at-rest is not complete.
21:15 dumontster semiosis: thank you for your help!!!
21:15 theron joined #gluster
21:15 semiosis dumontster: yw, but if you had a problem in your config, it should be apparent when a new client sets up
21:15 semiosis it's not
21:16 Ch3LL_ semiosis: I believe the writes are coming from coldfusion. They are coldfusion/apache boxes
21:16 semiosis Ch3LL_: writes coming from everywhere is difficult with multi-site, as gluster doesnt yet have multi-master geo replication
21:16 dumontster semiosis: i cannot determine the cause of the network transport issue - i will have to wait until it happens again
21:17 semiosis dumontster: oooh intermittent?
21:17 semiosis interesting
21:17 dumontster right
21:17 semiosis ah, i didnt catch that earlier
21:17 dumontster the solution is to umount and remount
21:17 theron_ joined #gluster
21:17 Ch3LL_ hmmm. okay i will take a look at a different setup plan
21:18 dumontster copy that - yeah the volume mounts fine, but the errors during operation, not sure the cause
21:18 semiosis dumontster: fortunately in more than 2 years running gluster in ec2, that has been extremely rare for me.  maybe only a couple/few times, briefly.
21:18 dumontster wow what version are you running?
21:18 semiosis for a looong time i was on 3.1.7, then a few months ago i upgraded to 3.4.2
21:18 dumontster semiosis: that's awesome to hear
21:18 semiosis maybe it's been 3 years already
21:19 semiosis time flies
21:19 semiosis aside from a couple major ec2 outages the LAN there has been super reliable
21:20 semiosis us-east-1 that is
21:20 semiosis not sure where you are
21:20 dumontster yeah same
21:21 dumontster semiosis: what instance types are you running?
21:21 dumontster i downsized from r3.xlarge to c3.large
21:21 semiosis my gluster servers are m1.large, everything else is m3 or c3
21:22 semiosis mediums & larges
21:23 semiosis i should try upgrading my servers again.  maybe they fixed that tcp bug
21:23 dumontster yeah appears so
21:23 semiosis the savings would make CEO happy
21:24 dumontster i hear ya...i was running r3.xlarge with provisioned iops at 3x the cost
21:24 dumontster downsized but forgot to redirect the savings into my pocket
21:25 semiosis hah
21:25 dumontster semiosis: i really appreciate your effort, best of luck
21:25 semiosis thx you too
21:27 dumontster left #gluster
21:30 gehaxelt Is it possible to define a maximum brick size per node?
21:30 gehaxelt e.g. one server offering a 1tb brick, two other servers offering only 500gb / 250gb?
21:30 gehaxelt Is that compatible with replica (e.g. 2) ?
21:33 rotbeard joined #gluster
21:35 theron joined #gluster
21:36 semiosis gehaxelt: no
21:36 semiosis gehaxelt: although you *can* divide up your disks into partitions and use those as bricks, having more than one brick per server
21:37 semiosis balancing your replicas among different servers may be tricky
21:37 semiosis but it's possible
21:38 gehaxelt semiosis, okay, Thanks.
21:38 semiosis yw
21:38 semiosis gehaxelt: or instead of partitions, use LVM to split up the disks
21:38 semiosis probably a better idea
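A sketch of the LVM approach, with a hypothetical spare disk /dev/sdb; each logical volume gets its own filesystem and is used as one brick:

    pvcreate /dev/sdb
    vgcreate gluster_vg /dev/sdb
    lvcreate -L 250G -n brick1 gluster_vg
    mkfs.xfs /dev/gluster_vg/brick1
    mkdir -p /data/brick1
    mount /dev/gluster_vg/brick1 /data/brick1
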
21:39 gehaxelt semiosis, great idea!
21:39 gehaxelt well, I'll have to find some cheap storage servers first.
21:39 gehaxelt Then I can set up some kind of backup strategy with glusterfs...
21:39 gehaxelt At least that was my idea.
21:39 semiosis i think someone got gluster running on raspberry pi, those are pretty cheap
21:39 semiosis ;)
21:40 gehaxelt semiosis, right, but hosting my pi @ home at 65kb/s uplink should suck a bit.
21:40 gehaxelt But, I have already thought about that :)
21:48 eightyeight joined #gluster
21:53 sjm left #gluster
22:02 theron joined #gluster
22:02 coredump joined #gluster
22:13 Magnus_ joined #gluster
22:15 Magnus_ Hello 0/   I was wondering if anyone could point me in the right direction for where to look in the documentation for something I am trying to do. I was trying to have 2 nodes both connect to the same iscsi target disc and use gluster to manage their access. I am not sure if gluster supports this, or what it would be called in the documentation.
22:15 semiosis that's not what glusterfs is for
22:17 Magnus_ That's what I was worried about. It seems like I would be better off assigning 1 disk to each node and then connecting them with glusterfs
22:17 semiosis Magnus_: normally you would use local, direct attached storage with glusterfs
22:17 systemonkey joined #gluster
22:18 Magnus_ ok, I will have to do some research then. Thank you.
22:19 semiosis sure, if you have more questions feel free to ask
22:23 Magnus_ I will definitely. Thanks again
22:28 phox joined #gluster
22:47 badone joined #gluster
22:51 fidevo joined #gluster
23:21 verdurin joined #gluster
23:22 theron joined #gluster
