IRC log for #gluster, 2016-05-30


All times shown according to UTC.

Time Nick Message
00:01 haomaiwang joined #gluster
00:11 plarsen joined #gluster
00:27 natarej joined #gluster
01:01 haomaiwang joined #gluster
01:33 Lee1092 joined #gluster
01:46 haomaiwang joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:49 haomaiwang joined #gluster
02:01 haomaiwang joined #gluster
02:02 harish joined #gluster
02:09 m0zes joined #gluster
02:59 atinm joined #gluster
03:01 haomaiwang joined #gluster
03:01 m0zes joined #gluster
03:10 ramteid joined #gluster
03:20 kdhananjay joined #gluster
03:37 rafi joined #gluster
03:46 itisravi joined #gluster
04:01 haomaiwang joined #gluster
04:20 gowtham joined #gluster
04:23 rafi joined #gluster
04:27 nehar joined #gluster
04:30 gem joined #gluster
04:32 hchiramm joined #gluster
04:36 nbalacha joined #gluster
04:39 glafouille joined #gluster
04:40 JesperA joined #gluster
04:41 jiffin joined #gluster
04:41 overclk joined #gluster
04:48 hgowtham joined #gluster
04:50 shubhendu joined #gluster
04:51 atinm joined #gluster
04:54 armyriad joined #gluster
05:01 haomaiwang joined #gluster
05:09 prasanth joined #gluster
05:12 Apeksha joined #gluster
05:14 ppai joined #gluster
05:16 mowntan joined #gluster
05:17 rafi joined #gluster
05:26 Siavash_ joined #gluster
05:27 rastar joined #gluster
05:49 RameshN joined #gluster
05:49 Wizek__ joined #gluster
05:52 sakshi joined #gluster
05:53 spalai joined #gluster
05:56 Wizek joined #gluster
05:57 kotreshhr joined #gluster
06:01 haomaiwang joined #gluster
06:03 aravindavk joined #gluster
06:03 Wizek_ joined #gluster
06:07 ashiq joined #gluster
06:07 Wizek joined #gluster
06:09 aspandey joined #gluster
06:10 jtux joined #gluster
06:20 hchiramm joined #gluster
06:24 Debloper joined #gluster
06:26 m0zes joined #gluster
06:26 atalur joined #gluster
06:28 Bhaskarakiran joined #gluster
06:28 karnan joined #gluster
06:31 Mmike joined #gluster
06:31 level7_ joined #gluster
06:32 poornimag joined #gluster
06:39 R0ok_ joined #gluster
06:40 natarej joined #gluster
06:42 Siavash_ joined #gluster
06:50 fsimonce joined #gluster
06:52 Debloper joined #gluster
06:55 jtux joined #gluster
06:57 Gnomethrower joined #gluster
07:00 deniszh joined #gluster
07:01 ivan_rossi joined #gluster
07:01 ivan_rossi left #gluster
07:01 haomaiwang joined #gluster
07:07 pur__ joined #gluster
07:09 [Enrico] joined #gluster
07:11 ramky joined #gluster
07:18 [Enrico] joined #gluster
07:20 m0zes joined #gluster
07:26 hackman joined #gluster
07:37 Slashman joined #gluster
07:38 Saravanakmr joined #gluster
07:38 TvL2386 joined #gluster
07:44 level7 joined #gluster
07:52 lord4163 joined #gluster
07:57 saltsa joined #gluster
07:58 unforgiven512 joined #gluster
07:58 kovshenin joined #gluster
08:01 haomaiwang joined #gluster
08:12 ahino joined #gluster
08:17 m0zes joined #gluster
08:40 aspandey_ joined #gluster
08:40 itisravi joined #gluster
08:41 atalur joined #gluster
08:42 kovshenin joined #gluster
08:44 cyberbootje joined #gluster
08:47 rafi joined #gluster
08:47 level7_ joined #gluster
09:01 haomaiwang joined #gluster
09:34 m0zes joined #gluster
09:35 aspandey_ joined #gluster
09:40 itisravi joined #gluster
09:42 Saravanakmr joined #gluster
09:44 Test-ramesh joined #gluster
09:59 harish joined #gluster
10:01 haomaiwang joined #gluster
10:12 level7 joined #gluster
10:13 Saravanakmr joined #gluster
10:14 level7_ joined #gluster
10:15 Jules- joined #gluster
10:25 m0zes joined #gluster
10:25 glafouille joined #gluster
10:36 aravindavk_ joined #gluster
10:39 gem joined #gluster
10:39 DV_ joined #gluster
10:45 arcolife joined #gluster
10:47 m0zes joined #gluster
10:47 nbalacha joined #gluster
10:50 nbalacha joined #gluster
11:00 glafouille joined #gluster
11:01 haomaiwang joined #gluster
11:07 aravindavk joined #gluster
11:19 luizcpg joined #gluster
11:21 johnmilton joined #gluster
11:27 haomaiwang joined #gluster
11:28 atalur joined #gluster
11:36 gem joined #gluster
11:37 ppai joined #gluster
12:01 haomaiwang joined #gluster
12:11 kotreshhr left #gluster
12:18 Lee1092 joined #gluster
12:26 jvandewege joined #gluster
12:35 d0nn1e joined #gluster
12:36 _fortis joined #gluster
12:54 natarej joined #gluster
12:57 karnan joined #gluster
13:01 F2Knight joined #gluster
13:14 rafi1 joined #gluster
13:24 Elmo_ joined #gluster
13:24 spalai joined #gluster
13:24 Elmo_ Hi
13:24 glusterbot Elmo_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:25 Elmo_ I have an issue where rebalancing a 4 brick config (2 replicas, 2 distributed)
13:25 Elmo_ hangs. If I try to stop the volume, I get the
13:25 Elmo_ failed: rebalance session is in progress for the volume
13:26 Elmo_ error
13:26 Elmo_ then trying to do rebalance status on it says
13:26 Elmo_ failed: Volume gv0 is not a distribute volume or contains only 1 brick
13:26 Elmo_ so I'm kind of stuck in between, anybody have a clue?
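
For context, the gluster CLI calls behind Elmo_'s report look roughly like this; gv0 is the volume name taken from his error message, and the exact error wording varies by gluster version:

    gluster volume stop gv0              # fails: "rebalance session is in progress for the volume"
    gluster volume rebalance gv0 status  # fails: "Volume gv0 is not a distribute volume or contains only 1 brick"
    gluster volume rebalance gv0 stop    # the usual way to clear a rebalance before stopping a volume
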
13:28 spalai left #gluster
13:34 nehar joined #gluster
13:36 rafi joined #gluster
13:37 Slashman joined #gluster
13:38 rafi joined #gluster
13:42 Gnomethrower joined #gluster
13:46 atinm joined #gluster
13:48 jiffin joined #gluster
14:05 haomaiwang joined #gluster
14:15 nbalacha joined #gluster
14:21 luizcpg_ joined #gluster
14:27 kotreshhr joined #gluster
14:28 owlbot joined #gluster
14:32 jiffin joined #gluster
14:33 gem joined #gluster
14:58 Slashman joined #gluster
15:01 haomaiwang joined #gluster
15:02 dlambrig_ joined #gluster
15:30 rafi1 joined #gluster
15:31 kotreshhr left #gluster
15:39 rafi joined #gluster
15:54 m0zes joined #gluster
16:01 haomaiwang joined #gluster
16:04 m0zes joined #gluster
16:06 armyriad joined #gluster
16:11 armyriad joined #gluster
16:33 F2Knight joined #gluster
16:49 luizcpg joined #gluster
16:59 Bhaskarakiran joined #gluster
16:59 kotreshhr joined #gluster
17:00 kotreshhr left #gluster
17:01 haomaiwang joined #gluster
17:15 julim joined #gluster
17:45 MikeLupe joined #gluster
17:51 mowntan joined #gluster
18:01 haomaiwang joined #gluster
18:24 deniszh joined #gluster
18:24 alvinstarr joined #gluster
18:29 deniszh joined #gluster
18:39 F2Knight joined #gluster
18:43 rafi joined #gluster
18:45 m0zes joined #gluster
18:54 ic0n joined #gluster
18:58 ahino joined #gluster
19:01 haomaiwang joined #gluster
19:07 johnmilton joined #gluster
19:14 luizcpg joined #gluster
19:17 alvinstarr joined #gluster
19:29 johnmilton joined #gluster
19:51 m0zes joined #gluster
19:52 petan joined #gluster
20:01 haomaiwang joined #gluster
20:20 plarsen joined #gluster
20:21 deniszh1 joined #gluster
20:46 F2Knight joined #gluster
20:57 MikeLupe Hi - I pee in my pants because I'm about to expand my r3a1 (replica 3 arbiter 1) volume with another disk in each of the two "data" hosts. Maybe someone could guide me?
20:57 DV_ joined #gluster
21:01 haomaiwang joined #gluster
21:07 post-factum MikeLupe: https://www.huggies.com/en-us/#expecting-mom
21:07 glusterbot Title: Huggies Diapers, Baby Products, Supplies & Coupons! (at www.huggies.com)
21:07 post-factum MikeLupe: if the question is about gluster volume layout, please be more specific
21:07 MikeLupe post-factum: I know - thanks
21:08 MikeLupe I'll try
21:09 MikeLupe I've already got a 2nd disk on each of my hosts. Now I'll add a 3rd disk and would like to be able to extend one of the 3 gluster volumes I have on the second disk: /dev/mapper/gluster_vg1-data /gluster/data xfs defaults 0 0
21:10 MikeLupe This still isn't a question, I know... let me try to find the right one
21:13 post-factum MikeLupe: the algorithm is: 1) explain in details what you have; 2) define what you would like to achieve; 3) make a plan
21:13 post-factum MikeLupe: 1) and 2) are up to you; we may help with 3)
21:13 MikeLupe Here is kind of a question: how the heck will I be able to mount the 3rd drive into the same /gluster/data? I know I'm missing something - but what? Don't I have to mount to /gluster/data, or can I mount to another dir?
21:14 post-factum MikeLupe: I don't understand your current volume layout. Could you please show gluster volume info instead?
21:14 post-factum @paste
21:14 glusterbot post-factum: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
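
For example, assuming netcat is installed, the volume layout post-factum asks for could be shared with:

    gluster volume info | nc termbin.com 9999
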
21:16 MikeLupe http://pastebin.com/1aTFqigX
21:16 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:16 MikeLupe ? argh
21:17 MikeLupe https://da.gd/iH2We
21:17 glusterbot Title: #372784 Fedora Project Pastebin (at da.gd)
21:18 MikeLupe It's one of three gluster volumes on /gluster/
21:18 MikeLupe I haven't prepared the 3rd disk yet - it's still completely untouched, no format, no mount
21:19 post-factum MikeLupe: so you have replica 3 arbiter 1, consisting of 3 nodes, 1 brick on each node
21:19 post-factum MikeLupe: and you would like to add 1 disk to 1 node only?
21:21 MikeLupe yes, 1 brick on each node for that volume. I've added a 3rd disk on each of the 2 data nodes. For the arbiter I'll have to deal with the current 2 small disks
21:21 deniszh joined #gluster
21:21 MikeLupe Does that make sense?
21:22 MikeLupe I'm confused, because the only way I see is mounting the 3rd disk directly into /gluster/data/brick2. Am I totally wrong?
21:24 MikeLupe And additionally confused about how I'm going to handle the LVM logical volumes, before even handling the gluster volume.
21:25 post-factum MikeLupe: I still do not get your confusion exactly, but in order to extend the current layout you have to either add 3 bricks to your volume, making it a distributed-replicated volume, or extend the underlying devices if those are managed by LVM
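
A sketch of the first option, for comparison: adding one more replica set (2 data bricks + 1 arbiter brick) turns the volume into a 2 x (2 + 1) distributed-replicate layout. The volume name, the second and third hostnames, and the new brick paths below are assumptions modelled on the paste; check the add-brick/arbiter syntax against your gluster version:

    gluster volume add-brick data replica 3 arbiter 1 \
        slp-ovirtnode-01.corp.domain.tld:/gluster/data2/brick1 \
        slp-ovirtnode-02.corp.domain.tld:/gluster/data2/brick1 \
        slp-ovirtnode-03.corp.domain.tld:/gluster/data2/brick1

A rebalance (or at least a fix-layout) afterwards spreads existing files across both replica sets.
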
21:26 MikeLupe yes, second's what I have to do.
21:26 MikeLupe Give me one more try
21:26 post-factum so, /gluster/data/brick1 on each node is some FS on LVM?
21:27 MikeLupe http://paste.fedoraproject.org/372789/64361814/
21:27 glusterbot Title: #372789 Fedora Project Pastebin (at paste.fedoraproject.org)
21:27 MikeLupe why fedoraproject now..
21:28 luizcpg joined #gluster
21:28 MikeLupe So I have: LV Path                /dev/gluster_vg1/data
21:28 MikeLupe And brick on: Brick1: slp-ovirtnode-01.corp.domain.tld:/gluster/data/brick1
21:28 post-factum MikeLupe: ok, if you would like to employ LVM and not mess with a distributed-replicated setup, just google an LVM manual, extend the current LV, grow the FS, and gluster will handle the new space automatically
21:29 MikeLupe Ok thanks - that's enough.
21:30 post-factum MikeLupe: i believe you have to deal with pvcreate, vgextend and lvresize commands
21:30 MikeLupe I just was (and still am a bit) scared about the growing process and which node order I should follow
21:30 MikeLupe yes
21:31 post-factum what is your FS? XFS?
21:31 MikeLupe yes
21:31 post-factum you may grow XFS online without any downtime
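
A minimal sketch of that sequence, using the VG and LV names from MikeLupe's paste; the new disk's device name (/dev/sdc) is an assumption, and lvextend is the grow-only form of the lvresize command post-factum mentions:

    pvcreate /dev/sdc                            # initialize the new disk as an LVM physical volume (device name assumed)
    vgextend gluster_vg1 /dev/sdc                # add it to the existing volume group
    lvextend -l +100%FREE /dev/gluster_vg1/data  # grow the data LV over all of the new space
    xfs_growfs /gluster/data                     # grow XFS online; the mounted brick keeps serving

Because the filesystem grows online, the bricks stay up throughout, so node order shouldn't matter much; doing one node at a time is the cautious route, and gluster reports the extra space once each brick's filesystem has grown.
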
21:32 MikeLupe ermm ok - I'll do some research on that. Thanks a lot for the hints
21:33 mowntan joined #gluster
21:34 post-factum np
21:40 deniszh1 joined #gluster
21:44 julim joined #gluster
21:49 deniszh joined #gluster
22:01 haomaiwang joined #gluster
22:35 m0zes joined #gluster
22:46 hackman joined #gluster
23:19 al joined #gluster
23:22 JesperA- joined #gluster
23:38 julim joined #gluster
