
IRC log for #gluster, 2015-04-21


All times shown according to UTC.

Time Nick Message
00:07 badone__ joined #gluster
00:17 badone_ joined #gluster
00:27 itisravi joined #gluster
00:47 kdhananjay joined #gluster
00:50 plarsen joined #gluster
00:57 tg2 hmm, still getting random crash in 3.6.2 client, http://www.fpaste.org/213619/42957774/
01:08 suliba joined #gluster
01:13 jobewan joined #gluster
01:21 nangthang joined #gluster
01:29 harish_ joined #gluster
01:44 atalur joined #gluster
01:54 RicardoSSP joined #gluster
01:54 RicardoSSP joined #gluster
01:57 harish joined #gluster
02:06 badone_ joined #gluster
02:31 msmith_ joined #gluster
02:32 plarsen joined #gluster
02:35 gem_ joined #gluster
03:10 soumya joined #gluster
03:25 bharata-rao joined #gluster
03:32 nangthang joined #gluster
03:33 wkf joined #gluster
03:41 atinmu joined #gluster
03:45 itisravi joined #gluster
03:46 jobewan joined #gluster
03:46 kdhananjay joined #gluster
03:47 badone__ joined #gluster
03:49 msmith_ joined #gluster
03:49 overclk joined #gluster
04:05 gildub joined #gluster
04:19 wkf joined #gluster
04:28 schandra joined #gluster
04:32 hagarth joined #gluster
04:32 rafi joined #gluster
04:33 kanagaraj joined #gluster
04:40 nbalacha joined #gluster
04:40 kumar joined #gluster
04:41 jiffin joined #gluster
04:42 anoopcs joined #gluster
04:43 pppp joined #gluster
04:48 Bhaskarakiran joined #gluster
04:50 gem joined #gluster
05:00 gem_ joined #gluster
05:05 nangthang joined #gluster
05:05 raghu joined #gluster
05:06 ndarshan joined #gluster
05:09 huleboer joined #gluster
05:14 jtux joined #gluster
05:15 sakshi joined #gluster
05:16 sabansal_ joined #gluster
05:18 deepakcs joined #gluster
05:19 Manikandan joined #gluster
05:19 Manikandan_ joined #gluster
05:24 lalatenduM joined #gluster
05:26 maveric_amitc_ joined #gluster
05:27 gem_ joined #gluster
05:31 skdon joined #gluster
05:32 ashiq joined #gluster
05:37 anil joined #gluster
05:39 rjoseph joined #gluster
05:49 atalur joined #gluster
05:53 mbukatov joined #gluster
05:56 karnan joined #gluster
05:57 mbukatov joined #gluster
05:58 smohan joined #gluster
05:59 mbukatov joined #gluster
05:59 wkf joined #gluster
06:04 kdhananjay joined #gluster
06:09 Anjana joined #gluster
06:14 soumya joined #gluster
06:16 glusterbot News from newglusterbugs: [Bug 1213703] geo-replication status xml output has incorrect grouping of pairs under sessions. <https://bugzilla.redhat.com/show_bug.cgi?id=1213703>
06:21 ghenry joined #gluster
06:22 vimal joined #gluster
06:24 atalur joined #gluster
06:41 stemid joined #gluster
06:41 stemid could someone please clarify if I could use GlusterFS to share one LUN from a SAN over NFS with HA? for example a raw device mapping of a LUN to two servers acting as NFS file servers.
06:47 meghanam joined #gluster
06:50 jtux joined #gluster
06:58 Anjana joined #gluster
06:58 kshlm joined #gluster
07:00 soumya joined #gluster
07:11 [Enrico] joined #gluster
07:17 glusterbot News from newglusterbugs: [Bug 1213720] [RFE - Snapshot] Snapshot cli operations should not impact scheduled jobs <https://bugzilla.redhat.com/show_bug.cgi?id=1213720>
07:19 gem_ joined #gluster
07:26 nangthang joined #gluster
07:33 fsimonce joined #gluster
07:36 lifeofguenter joined #gluster
07:36 ashiq joined #gluster
07:43 DV joined #gluster
07:46 gem__ joined #gluster
07:49 msmith_ joined #gluster
07:55 ktosiek joined #gluster
07:55 ctria joined #gluster
08:02 liquidat joined #gluster
08:03 DV_ joined #gluster
08:16 edong23 joined #gluster
08:20 Norky joined #gluster
08:20 atalur joined #gluster
08:20 T0aD joined #gluster
08:21 anrao joined #gluster
08:27 poornimag joined #gluster
08:39 harish joined #gluster
08:42 sakshi joined #gluster
08:45 soumya joined #gluster
08:46 gem__ joined #gluster
08:47 glusterbot News from newglusterbugs: [Bug 1213752] nfs-ganesha: Multi-head nfs need Upcall Cache invalidation support <https://bugzilla.redhat.com/show_bug.cgi?id=1213752>
09:00 DV joined #gluster
09:09 kokopelli joined #gluster
09:11 nbalachandran_ joined #gluster
09:15 deepakcs joined #gluster
09:19 raghug joined #gluster
09:24 Anjana joined #gluster
09:25 Prilly joined #gluster
09:32 gem_ joined #gluster
09:34 ira joined #gluster
09:34 schandra|away joined #gluster
09:44 p8952 Is there an easy way to see the replication status between two bricks?
09:44 p8952 For example, I have a volume with two bricks configured to replicate and I add a third brick. How do I know when all data has been replicated?
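A rough sketch of one way to check this (the volume name "myvol" is hypothetical): after the third brick is added to the replica set, the self-heal daemon copies data onto it, and the heal-info output shows what is still pending.

    gluster volume heal myvol info          # entries still waiting to be healed, per brick
    gluster volume heal myvol statistics    # heal counts, if your version supports it

Once no entries remain pending for the new brick, it holds a full copy.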
09:47 Guest55217 joined #gluster
09:57 soumya joined #gluster
09:59 Pupeno joined #gluster
10:08 anil joined #gluster
10:11 ctria joined #gluster
10:15 kkeithley2 joined #gluster
10:47 glusterbot News from newglusterbugs: [Bug 1213798] tiering: glusterd doesn't update the tiering info from info file <https://bugzilla.redhat.com/show_bug.cgi?id=1213798>
10:47 glusterbot News from newglusterbugs: [Bug 1213802] tiering:volume set command fails for tiered volume after restarting glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=1213802>
10:47 glusterbot News from newglusterbugs: [Bug 1213796] systemd integration with glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1213796>
10:56 jiffin joined #gluster
10:57 smohan joined #gluster
11:02 Anjana joined #gluster
11:09 SOLDIERz joined #gluster
11:16 LebedevRI joined #gluster
11:17 Bhaskarakiran_ joined #gluster
11:20 soumya joined #gluster
11:24 atinmu REMINDER: Gluster Community Bug Triage meeting today at 12:00 UTC in another ~35 mins @ #gluster-meeting
11:24 ndevos :)
11:26 anrao joined #gluster
11:29 raghug joined #gluster
11:32 gem_ joined #gluster
11:40 gildub joined #gluster
11:41 nbalachandran_ joined #gluster
11:43 B21956 joined #gluster
11:46 jiffin joined #gluster
11:48 glusterbot News from newglusterbugs: [Bug 1213821] nfs-ganesha: correct the interactive query thats displayed while executing features.ganesha cmd <https://bugzilla.redhat.com/show_bug.cgi?id=1213821>
11:51 anoopcs joined #gluster
11:56 anil joined #gluster
12:00 soumya joined #gluster
12:02 rafi1 joined #gluster
12:05 aravindavk joined #gluster
12:08 raghug joined #gluster
12:10 jiffin1 joined #gluster
12:17 dusmant joined #gluster
12:18 glusterbot News from newglusterbugs: [Bug 1210193] Commands hanging on the client post recovery of failed bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1210193>
12:18 glusterbot News from newglusterbugs: [Bug 1205709] ls command blocked when one brick disconnect, even reconnect. <https://bugzilla.redhat.com/show_bug.cgi?id=1205709>
12:19 rjoseph joined #gluster
12:20 plarsen joined #gluster
12:23 mbukatov joined #gluster
12:24 mbukatov joined #gluster
12:25 necrogami joined #gluster
12:26 kdhananjay joined #gluster
12:26 xiu JoeJulian: just a follow up, managed to upgrade my gluster cluster to 3.3.2, everything now runs smoothly. thanks for the advice! :) now I have time to plan my upgrade to a fresher release ;)
12:28 dgandhi joined #gluster
12:29 dgandhi joined #gluster
12:29 Anjana joined #gluster
12:30 rafi joined #gluster
12:31 dgandhi joined #gluster
12:32 Gill_ joined #gluster
12:32 dgandhi joined #gluster
12:34 rwheeler joined #gluster
12:39 rjoseph joined #gluster
12:40 sacrelege joined #gluster
12:42 sacrelege hi all, I'm about to set up GlusterFS on 3 nodes with 2 disks each and have some questions.
12:43 sacrelege I need to make sure I don't lose data when a node crashes, so I want to go for replication.
12:44 sacrelege When I want to add a 4th node to expand the available disk space, would that work? by using "replica 2" - so I guess you call that "distributed replicated"?
12:45 hagarth joined #gluster
12:45 stemid left #gluster
12:47 sacrelege or in other words: if I choose 'replica 2' and have 6 bricks, how much space is lost due to replication, and how much space is lost when I add two additional bricks (total 8, replica 2)
12:47 schandra|away joined #gluster
12:53 atalur joined #gluster
12:53 lifeofguenter joined #gluster
12:57 Manikandan joined #gluster
12:58 ashiq joined #gluster
12:59 mbukatov joined #gluster
12:59 atinmu joined #gluster
12:59 kdhananjay joined #gluster
13:00 bennyturns joined #gluster
13:00 shubhendu joined #gluster
13:01 smohan joined #gluster
13:03 Norky replica 2 means you store everything twice, so 50% of your raw space is 'lost'
13:04 marbu joined #gluster
13:05 sacrelege Norky, but how does this work if I have 6 bricks? since it will take the order of the given bricks in pairs of two?
13:07 itpings joined #gluster
13:11 kanagaraj joined #gluster
13:17 dusmant joined #gluster
13:18 glusterbot News from newglusterbugs: [Bug 905747] [FEAT] Tier support for Volumes <https://bugzilla.redhat.com/show_bug.cgi?id=905747>
13:18 glusterbot News from newglusterbugs: [Bug 1200264] Upcall: Support to handle upcall notifications asynchronously <https://bugzilla.redhat.com/show_bug.cgi?id=1200264>
13:18 glusterbot News from newglusterbugs: [Bug 1200266] Upcall: Support to filter out duplicate upcall notifications received <https://bugzilla.redhat.com/show_bug.cgi?id=1200266>
13:18 glusterbot News from newglusterbugs: [Bug 1200267] Upcall: Cleanup the expired upcall entries <https://bugzilla.redhat.com/show_bug.cgi?id=1200267>
13:18 Norky what do you mean, "how does this work"?
13:23 sacrelege Norky, hmm, I'm a bit confused about how much space is really used by replication. If I have 4 bricks and choose "replica 2", 50% of the total space can be used. When I add 2 additional bricks and rebalance.. that's the part where I get confused. Let's say every brick is 1 TB (I've got 6 bricks or 6TB in total). With replica 2, it can't use half of it, otherwise it would be the same as "replica 3", right?
13:24 Norky with "replica 2", the first two bricks specified constitute a replica set
13:25 Norky so for "gluster volume create test-volume replica 2 brick0 brick1 brick2 brick3", brick0 and brick1 are a mirror of each other
13:25 Norky and brick2 and brick3 are another replica set
13:26 Norky data is then distributed across those replica sets
13:26 Norky what's not to understand?
13:26 Norky http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Creating_Distributed_Replicated_Volumes
13:26 Norky slightly out-of-date docs
13:26 Norky but that section still holds true I think
13:26 hamiller joined #gluster
13:27 georgeh-LT2 joined #gluster
13:27 sacrelege Norky, I get it. thx. I was overthinking something. :) So yes, with 6 bricks with 1 TB/brick, I can use a total of 3TB or like you said 50% :)
13:28 Norky indeed
13:28 Norky and if expanding, add bricks in pairs (more generally, add N bricks, where N is your replica count), and then rebalance
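A minimal sketch of such an expansion, assuming the volume above is named test-volume and the two new bricks sit on hypothetical servers node4 and node5:

    gluster volume add-brick test-volume node4:/export/brick0 node5:/export/brick0
    gluster volume rebalance test-volume start
    gluster volume rebalance test-volume status    # repeat until it reports completed

Adding the new pair across two different servers keeps each replica set spread over two nodes.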
13:30 sacrelege Norky, I guess it makes sense to remove certain bricks from the volume, then shuffle the new bricks with the old ones and then add them again/rebalance to make sure a newly added replica set is not on the same node.
13:33 lkoranda joined #gluster
13:42 DV joined #gluster
13:45 aravindavk joined #gluster
13:50 msmith joined #gluster
13:50 glusterbot News from resolvedglusterbugs: [Bug 1074095] Build errors on EL5 <https://bugzilla.redhat.com/show_bug.cgi?id=1074095>
13:51 Norky the placement depends on the order you specify, so where you have two (or more) bricks on the same server for a replica volume, then yes, pay attention to the ordering
13:52 julim joined #gluster
13:53 Norky e.g. "gluster volume create test-volume replica 2 node{0,1,2}:/brick0 node{0,1,2}:/brick1"
13:54 sacrelege alright. I think that settles my plan. Thank you very much.
13:54 Norky that uses shell globbing, stick an echo in front of it to see the effect
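Spelled out, that brace expansion produces the brick order below; with replica 2 the pairs become (node0:/brick0, node1:/brick0), (node2:/brick0, node0:/brick1) and (node1:/brick1, node2:/brick1), so every replica pair spans two different nodes:

    $ echo gluster volume create test-volume replica 2 node{0,1,2}:/brick0 node{0,1,2}:/brick1
    gluster volume create test-volume replica 2 node0:/brick0 node1:/brick0 node2:/brick0 node0:/brick1 node1:/brick1 node2:/brick1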
13:55 Norky such a setup would get a little tricky if you were then to expand it with a single node with two bricks
13:55 Norky see http://blog.gluster.org/2013/01/linked-list-topology-with-glusterfs/ or rather the post it links to for an interesting solution
13:56 sacrelege reading...
13:59 sacrelege Norky, right, the "gluster volume replace-brick..." was exactly what I was thinking of. The idea with the "linked list structure" is cool and exactly what I need. many thanks man !
13:59 Norky hey, not my idea :)
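A hedged sketch of the replace-brick step such a rearrangement relies on (volume, brick and node names are hypothetical, and the exact syntax varies between releases; on a replicated volume the replacement brick is repopulated by self-heal afterwards):

    gluster volume replace-brick test-volume node2:/brick1 node3:/brick1 commit force
    gluster volume heal test-volume full    # kick off a full self-heal onto the new brick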
14:04 coredump joined #gluster
14:10 bene2 joined #gluster
14:12 coredump joined #gluster
14:15 Anjana joined #gluster
14:29 atinmu joined #gluster
14:32 mbukatov joined #gluster
14:32 DV joined #gluster
14:33 shaunm_ joined #gluster
14:54 bennyturns joined #gluster
14:58 getup joined #gluster
15:00 jmarley joined #gluster
15:07 virusuy_ joined #gluster
15:14 nbalachandran_ joined #gluster
15:18 julim joined #gluster
15:22 bennyturns joined #gluster
15:36 mbukatov joined #gluster
15:46 shubhendu joined #gluster
15:47 mbukatov joined #gluster
15:48 mbukatov joined #gluster
15:49 bennyturns joined #gluster
15:49 getup joined #gluster
15:50 roost__ joined #gluster
15:58 bene2 joined #gluster
16:02 msmith_ joined #gluster
16:03 msmith_ joined #gluster
16:14 poornimag joined #gluster
16:16 rafi joined #gluster
16:20 msmith joined #gluster
16:27 Guest64335 joined #gluster
16:28 wkf joined #gluster
16:28 msmith joined #gluster
16:29 jobewan joined #gluster
16:56 raghug joined #gluster
17:04 julim joined #gluster
17:05 soumya joined #gluster
17:15 edualbus joined #gluster
17:22 ekuric joined #gluster
17:26 roost joined #gluster
17:27 Rapture joined #gluster
17:31 lalatenduM joined #gluster
17:47 rafi joined #gluster
17:49 chirino joined #gluster
17:53 getup joined #gluster
17:57 msmith joined #gluster
18:04 jcastillo joined #gluster
18:13 jcastillo joined #gluster
18:33 jcastill1 joined #gluster
18:51 lifeofguenter joined #gluster
19:13 ricky-ti1 joined #gluster
19:18 jcastillo joined #gluster
19:43 wkf joined #gluster
19:52 lifeofguenter joined #gluster
20:19 ekuric left #gluster
20:23 redbeard joined #gluster
20:26 DV joined #gluster
20:40 lexi2 joined #gluster
20:48 jcastill1 joined #gluster
21:07 haomaiw__ joined #gluster
21:10 Pupeno_ joined #gluster
21:29 jcastillo joined #gluster
21:31 badone_ joined #gluster
21:36 haomaiwa_ joined #gluster
21:39 _Bryan_ joined #gluster
21:41 21WABYM45 joined #gluster
21:54 wkf joined #gluster
21:59 wdennis joined #gluster
22:02 wdennis Hi all -- I removed bricks (repl 2) from one volume, and now want to use them on another volume - getting an error when I try to add them... volume add-brick: failed: <pathname> is already part of a volume
22:02 wdennis How can I prepare these bricks for reuse?
22:02 JoeJulian I prefer just formatting them. That's the quickest.
22:03 wdennis JoeJulian: OK, thx
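A rough sketch of both options for reusing such a brick (the mount point /export/brick0 and the device path are placeholders):

    # option 1: reformat the brick (quickest, as suggested above)
    umount /export/brick0
    mkfs.xfs -f /dev/<brick-device>
    mount /export/brick0

    # option 2: keep the filesystem and strip the old volume's metadata instead
    setfattr -x trusted.glusterfs.volume-id /export/brick0
    setfattr -x trusted.gfid /export/brick0
    rm -rf /export/brick0/.glusterfs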
22:12 dgandhi joined #gluster
22:16 wdennis Another quick q: does native client still require FUSE?
22:16 wdennis (using CentOS 7.1 / GlusterFS 3.6)
22:17 JoeJulian Always will.
22:17 wdennis JoeJulian: Thx again
22:18 JoeJulian You're welcome.
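For reference, mounting a volume with the native (FUSE) client looks roughly like this; server and volume names are hypothetical:

    mount -t glusterfs server1:/myvol /mnt/myvol
    # or, for a persistent mount, an /etc/fstab line such as:
    # server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0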
22:18 wdennis Reading the latest Admin Manual I could find, which is for 3.3.0 -- didn't know if things have changed since that version...
22:23 JoeJulian Tell me the process you used for finding that?
22:27 JoeJulian I'm really curious how that keeps happening.
22:27 JoeJulian wdennis:
22:28 wdennis JoeJulian: I think the Google is to blame, but let me check...
22:30 wdennis JoeJulian: can't quite make out how I got there, but this is the URL:
22:31 JoeJulian I ask, because going to gluster.org and following the links to documentation *should* be pretty straightforward.
22:32 JoeJulian But I keep having people talking about how shitty the documentation is, then describing how they're not actually using it.
22:32 wdennis http://www.gluster.org/wp-content/up​loads/2012/05/Gluster_File_System-3.​3.0-Administration_Guide-en-US.pdf
22:32 JoeJulian misc: Can you redirect that?
22:32 wdennis Was looking for an administrator's manual that has all the relevant commands in it
22:33 JoeJulian gluster.org  .. documentation .. Administrator's Guide
22:34 wdennis I know GitHub has an admin's manual in docs (going from memory here, may be incorrect on details) but it's not in order, nor does it have instructions for rendering it into a document (like a PDF)
22:34 JoeJulian Hmm, so you're one of *them*. ;)
22:35 JoeJulian I've never understood the draw to pdf.
22:35 wdennis Yes, just a guy with limited time who wants to get some Gluster going
22:36 wdennis JoeJulian: does not *need* to be a PDF per se, but a TOC of chapters or something would be nice...
22:36 JoeJulian Agreed.
22:48 haomaiwa_ joined #gluster
22:51 PaulCuzner joined #gluster
23:17 msmith joined #gluster
