
IRC log for #gluster, 2015-08-03


All times shown according to UTC.

Time Nick Message
00:02 chirino joined #gluster
00:08 gildub joined #gluster
00:09 nangthang joined #gluster
00:10 liewegas joined #gluster
00:36 calisto joined #gluster
00:54 gildub joined #gluster
01:08 ct Why do I get this error with 3.7.3 when I run gluster volume status?
01:08 ct [2015-08-03 01:02:36.665772] E [glusterd-op-sm.c:252:glusterd_get_txn_opinfo] (-->/usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_big_locked_handler+0x30) [0x7f2090e43970] -->/usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(__glusterd_handle_stage_op+0x1be) [0x7f2090e455ee] -->/usr/lib64/glusterfs/3.7.3/xlator/mgmt/glusterd.so(glusterd_get_txn_opinfo+0x147) [0x7f2090e566e7] ) 0-management: Unable
01:08 glusterbot ct: ('s karma is now -98
01:09 ct to get transaction opinfo for transaction ID : 1c1e9cbe-cc5e-432c-866c-c5a625d66e23 [Permission denied]
01:27 plarsen joined #gluster
01:32 afics joined #gluster
01:37 Lee1092 joined #gluster
01:38 csim joined #gluster
01:40 harish joined #gluster
01:45 nangthang joined #gluster
02:09 haomai___ joined #gluster
02:10 kevein joined #gluster
02:22 harish joined #gluster
02:24 kdhananjay joined #gluster
02:26 haomaiwa_ joined #gluster
02:51 harish joined #gluster
02:54 julim joined #gluster
03:03 kshlm joined #gluster
03:08 bharata-rao joined #gluster
03:12 Manikandan joined #gluster
03:16 Manikandan joined #gluster
03:19 aaronott joined #gluster
03:22 nishanth joined #gluster
03:25 calisto joined #gluster
03:36 julim joined #gluster
03:43 itisravi joined #gluster
03:43 atinm joined #gluster
03:49 kanagaraj joined #gluster
03:59 TheSeven joined #gluster
04:00 ppai joined #gluster
04:02 shubhendu joined #gluster
04:06 ramky joined #gluster
04:18 hgowtham joined #gluster
04:22 sakshi joined #gluster
04:27 jwd joined #gluster
04:30 jwaibel joined #gluster
04:33 ira joined #gluster
04:37 maveric_amitc_ joined #gluster
04:39 deepakcs joined #gluster
04:44 gem joined #gluster
04:44 ndarshan joined #gluster
04:46 yazhini joined #gluster
04:50 meghanam joined #gluster
04:51 ramteid joined #gluster
04:51 aravindavk joined #gluster
05:07 jiffin joined #gluster
05:08 vimal joined #gluster
05:16 SOLDIERz joined #gluster
05:19 pppp joined #gluster
05:19 RameshN joined #gluster
05:26 kotreshhr joined #gluster
05:28 ashiq joined #gluster
05:29 aaronott joined #gluster
05:35 maveric_amitc_ joined #gluster
05:40 kanagaraj joined #gluster
05:44 prabu joined #gluster
05:48 jwd joined #gluster
05:49 skoduri joined #gluster
05:50 overclk joined #gluster
05:55 nbalacha joined #gluster
05:59 ashiq joined #gluster
06:00 Bhaskarakiran joined #gluster
06:08 R0ok_ joined #gluster
06:09 hgowtham joined #gluster
06:11 spalai joined #gluster
06:17 anil joined #gluster
06:22 kdhananjay joined #gluster
06:24 vmallika joined #gluster
06:38 nangthang joined #gluster
06:38 kanagaraj joined #gluster
06:42 nishanth joined #gluster
06:48 meghanam joined #gluster
06:49 anil joined #gluster
06:49 spalai1 joined #gluster
06:57 hchiramm joined #gluster
06:59 Manikandan joined #gluster
07:05 Slashman joined #gluster
07:06 atalur joined #gluster
07:09 [Enrico] joined #gluster
07:10 rjoseph joined #gluster
07:11 Philambdo joined #gluster
07:14 TvL2386 joined #gluster
07:35 helenqiu joined #gluster
07:35 fsimonce joined #gluster
07:36 gem joined #gluster
07:39 sakshi joined #gluster
07:40 pcaruana joined #gluster
07:43 karnan joined #gluster
07:46 auzty joined #gluster
07:47 ctria joined #gluster
07:52 gem_ joined #gluster
07:58 spalai joined #gluster
08:15 LebedevRI joined #gluster
08:19 elico joined #gluster
08:23 Philambdo joined #gluster
08:27 meghanam joined #gluster
08:28 hgowtham joined #gluster
08:35 jcastill1 joined #gluster
08:40 jcastillo joined #gluster
08:49 Manikandan joined #gluster
08:55 nbalacha joined #gluster
08:55 atalur joined #gluster
08:57 mbukatov joined #gluster
08:59 spalai joined #gluster
09:05 kaushal_ joined #gluster
09:18 hgowtham_ joined #gluster
09:18 jcastill1 joined #gluster
09:20 atalur joined #gluster
09:20 harish joined #gluster
09:23 jcastillo joined #gluster
09:33 hgowtham joined #gluster
09:47 atinm joined #gluster
09:55 nbalacha joined #gluster
09:55 sakshi joined #gluster
10:01 Administrator__ joined #gluster
10:07 prg3_ joined #gluster
10:08 and`_ joined #gluster
10:08 vimal joined #gluster
10:09 Manikandan joined #gluster
10:09 ajames-41678 joined #gluster
10:09 ramky joined #gluster
10:13 kshlm joined #gluster
10:19 maveric_amitc_ joined #gluster
10:26 atinm joined #gluster
10:42 jcastill1 joined #gluster
10:44 gem joined #gluster
10:47 jcastillo joined #gluster
10:48 hgowtham_ joined #gluster
10:50 spalai1 joined #gluster
10:53 gem joined #gluster
10:59 gem joined #gluster
11:00 p8952 joined #gluster
11:06 firemanxbr joined #gluster
11:06 ashiq joined #gluster
11:07 Manikandan joined #gluster
11:08 ashiq joined #gluster
11:13 hgowtham__ joined #gluster
11:14 pcaruana joined #gluster
11:16 atalur joined #gluster
11:20 ashiq- joined #gluster
11:20 Manikandan_ joined #gluster
11:21 nbalacha joined #gluster
11:23 hgowtham joined #gluster
11:24 Manikandan joined #gluster
11:33 ashiq joined #gluster
11:34 Manikandan_ joined #gluster
11:43 Manikandan__ joined #gluster
11:43 julim joined #gluster
11:50 ajames41678 joined #gluster
11:50 hgowtham joined #gluster
11:52 Manikandan_ joined #gluster
11:54 nsoffer joined #gluster
11:56 jrm16020 joined #gluster
11:58 kanagaraj joined #gluster
11:59 Manikandan__ joined #gluster
11:59 RedW joined #gluster
12:00 Lee- joined #gluster
12:01 overclk joined #gluster
12:02 Mr_Psmith joined #gluster
12:04 gildub joined #gluster
12:07 itisravi_ joined #gluster
12:12 rjoseph joined #gluster
12:14 hgowtham joined #gluster
12:17 ppai joined #gluster
12:19 unclemarc joined #gluster
12:23 kotreshhr left #gluster
12:32 elico joined #gluster
12:33 kanagaraj joined #gluster
12:37 plarsen joined #gluster
12:56 mikemol So, without any real idea why Gluster 3.7.3 libgfapi throws "permission denied" errors in bareos's face when it tries to open existing files, I'm either going to have to downgrade to a 3.6-series gluster, or switch to the FUSE-based system and hope the memory leaks are gone. :(
12:58 DV joined #gluster
13:01 jcastill1 joined #gluster
13:06 nsoffer joined #gluster
13:07 jcastillo joined #gluster
13:09 overclk joined #gluster
13:14 aaronott joined #gluster
13:23 ajames-41678 joined #gluster
13:24 haomai___ joined #gluster
13:27 theron joined #gluster
13:29 B21956 joined #gluster
14:01 overclk joined #gluster
14:01 shaunm joined #gluster
14:02 haomaiwa_ joined #gluster
14:08 calisto joined #gluster
14:22 aaronott joined #gluster
14:29 sac joined #gluster
14:29 hagarth joined #gluster
14:32 PatNarciso joined #gluster
14:35 jcastill1 joined #gluster
14:41 jcastillo joined #gluster
14:45 jbautista- joined #gluster
14:52 nsoffer joined #gluster
14:58 bennyturns joined #gluster
15:02 haomaiwa_ joined #gluster
15:04 mpietersen joined #gluster
15:22 theron joined #gluster
15:23 wushudoin joined #gluster
15:26 beardyjay joined #gluster
15:26 DV joined #gluster
15:36 B21956 joined #gluster
15:41 beardyjay I have inherited a server which uses GlusterFS to store backups. I have an issue where
15:41 beardyjay it's running out of space; checking the volume status etc. I can see that the disks/bricks
15:41 beardyjay are different sizes. The .172 host is fine for space but .173 is about to run out. Details
15:41 beardyjay are here: http://pastebin.com/raw.php?i=iaUCfMQm.
15:41 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:42 beardyjay @paste
15:42 glusterbot beardyjay: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
15:43 beardyjay Pastebin with my question: http://pastebin.com/raw.php?i=9WuHidty :)
15:43 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:43 dgandhi joined #gluster
15:44 elico joined #gluster
15:44 cholcombe joined #gluster
15:45 Marqin_ joined #gluster
15:50 l0uis beardyjay: What's your question? It's generally recommended for bricks to be the same size.
15:50 l0uis However, I think newer versions of gluster handle this better than older versions.
15:51 beardyjay Could I add a new brick to the volume on the .173 server so the volume would see more space?
15:52 l0uis no, since you have replica x2 you'd have to add a new brick to both nodes. also, you'd still run into this problem if the bricks are not homogeneous in size.
15:52 l0uis the easiest solution is to make /dev/sdb1 == 400 G
15:53 beardyjay I got the server passed over to me; if I had set it up I would have kept things consistent :) The naming of the bricks is strange too ("(" in one of the names)
15:53 l0uis as far as i know. i'm no expert.
15:53 beardyjay okay thanks, issue is /dev/sdb1 is not LVM and is XFS, and i don't think parted etc. support XFS
15:54 beardyjay no bother l0uis thanks for the feedback
15:54 l0uis well xfs can grow right?
15:54 l0uis btw
15:54 beardyjay yeah it can but I would have to extend the disk first of all
15:54 l0uis gluster does not care about what is behind the brick. it cares only about mount point. so, if you take /dev/sdb1 offline, fix it up using lvm or whatever, and bring it back you'll be fine as long as the mount point is the same
15:55 gem joined #gluster
15:55 calisto joined #gluster
15:55 l0uis beardyjay: i recently migrated some of my bricks from 2x raid-0 3TB to 1x 6TB, for example.
15:56 beardyjay ah nice l0uis, that does help me out thanks
15:58 l0uis final suggestion, if you have more slots, just add 2 new 200G bricks. your total volume space will then be 400G, so you won't utilize that big /dev/sdb1 on 172, but it will give you double the space.
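The two options l0uis outlines above (grow the filesystem behind the small brick, or add a brick pair to the replica-2 volume) roughly sketch out as the commands below. The volume name (gvol), host names (host172/host173), and brick paths are hypothetical placeholders, not taken from the log; the commands need a live gluster cluster and would have to match the actual layout.

```shell
# Option 1: grow the filesystem behind the small brick.
# XFS can only be grown (never shrunk), and the underlying block
# device must be enlarged first (e.g. via LVM or a bigger disk);
# xfs_growfs then expands the mounted filesystem to fill it.
xfs_growfs /bricks/brick1

# Option 2: add one new brick per node. On a replica-2 volume,
# bricks are added in multiples of the replica count, forming a
# new replicated pair; rebalance then spreads data onto it.
gluster volume add-brick gvol \
    host172:/bricks/brick2 host173:/bricks/brick2
gluster volume rebalance gvol start
```

This also matches l0uis's earlier point that gluster only cares about the mount point: option 1 can equally be done by taking the brick offline, rebuilding the storage underneath, and remounting it at the same path.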
16:00 bennyturns joined #gluster
16:02 haomaiwa_ joined #gluster
16:06 theron_ joined #gluster
16:06 _maserati joined #gluster
16:16 jiffin joined #gluster
16:19 cc1 joined #gluster
16:25 cc1 JoeJulian: Good morning
16:26 calavera joined #gluster
16:34 skoduri joined #gluster
16:58 Rapture joined #gluster
17:02 haomaiwa_ joined #gluster
17:02 chirino joined #gluster
17:03 cabillman joined #gluster
17:06 pcaruana joined #gluster
17:15 jwd joined #gluster
17:16 aravindavk joined #gluster
17:17 jwaibel joined #gluster
17:22 nsoffer joined #gluster
17:23 rotbeard joined #gluster
17:27 overclk joined #gluster
17:27 dijuremo joined #gluster
17:28 nsoffer joined #gluster
17:28 Slashman joined #gluster
17:35 calavera joined #gluster
17:40 _Bryan_ joined #gluster
17:41 wushudoin| joined #gluster
17:44 cc1 I've been dealing with timeout issues when creating a new pool on two hosts on the same subnet. DNS is working fine on the hosts. When I try to create the pool, i get a timeout. The work around is adding entries in /etc/hosts for each, with both the fqdn and the short. Seems like a DNS issue, but what is GlusterFS looking for and how?
17:44 tesaf left #gluster
17:44 nzero joined #gluster
17:47 wushudoin| joined #gluster
17:50 l0uis cc1: What do you mean "what is GlusterFS looking for" ?
17:52 cc1 It's obvious that it's doing a DNS query, but i can't seem to find any errors doing host/nslookup manually. So I'm wondering how gluster is doing the query and what response it is expecting.
17:54 cc1 The weird thing is I even tried probing just IPs and it still had timeout issues.
17:54 l0uis if that is the case why would modifying /etc/hosts change anything ?
17:56 cc1 well i didn't try modifying /etc/hosts with them probed with IPs. I was troubleshooting and tried that initially. I just got back and had one of those "hey i didn't try this, lets just cover all possibilities" thoughts. sure enough, it worked
17:56 l0uis My point is: an entry in /etc/hosts is essentially equivalent to using an IP since the DNS server is no longer involved.
17:57 l0uis Therefore, using an IP and using /etc/hosts should yield the same result barring typos
17:57 l0uis That it doesnt implies something else is going on.
17:59 cc1 can't think of anything else going on since these are clean installs of CentOS 6.6. Matches environment in lab.
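One detail worth noting about cc1's symptoms: host and nslookup query the DNS server directly, while glusterd resolves peer names through the standard C library resolver, which follows /etc/nsswitch.conf (typically /etc/hosts before DNS). So a name can resolve fine via nslookup yet behave differently for glusterd, and getent, which exercises the same NSS path, is the closer test. A quick sketch (the peer name is a hypothetical placeholder):

```shell
# getent follows /etc/nsswitch.conf, the same lookup path the
# resolver calls inside glusterd go through, so it sees /etc/hosts.
getent hosts localhost

# A peer name would be checked the same way; compare its output
# against a direct DNS query, which skips /etc/hosts entirely:
#   getent hosts gluster-peer.example.com
#   nslookup gluster-peer.example.com
```

If the two disagree, the discrepancy points at nsswitch/hosts configuration rather than the DNS server itself, which would be consistent with the /etc/hosts workaround helping here.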
18:02 haomaiwa_ joined #gluster
18:06 calavera joined #gluster
18:12 theron joined #gluster
18:22 B21956 joined #gluster
18:42 theron joined #gluster
19:02 haomaiwa_ joined #gluster
19:07 cyberswat joined #gluster
19:13 TheCthulhu1 joined #gluster
19:18 michaeljk joined #gluster
19:20 nsoffer joined #gluster
19:27 ctria joined #gluster
19:34 nsoffer joined #gluster
19:43 plarsen joined #gluster
20:01 calavera joined #gluster
20:02 7GHAAVKPV joined #gluster
20:04 telmich joined #gluster
20:05 telmich is it safe to upgrade from gluster 3.6.3 to 3.7.3?
20:50 calavera joined #gluster
20:57 cc1 joined #gluster
20:57 papamoose1 joined #gluster
20:58 natarej just double checking - there's no way to use the full capacity of heterogeneous bricks on a disperse volume is there?
21:01 haomaiwa_ joined #gluster
21:10 nsoffer joined #gluster
21:12 badone__ joined #gluster
21:14 natarej without using tiers
21:25 ipmango_ joined #gluster
21:26 calavera joined #gluster
21:30 Twistedgrim joined #gluster
21:37 nzero joined #gluster
21:42 DV joined #gluster
21:54 nzero joined #gluster
22:02 haomaiwa_ joined #gluster
22:31 DV joined #gluster
22:50 calisto joined #gluster
23:01 gildub joined #gluster
23:02 haomaiwang joined #gluster
23:04 jbautista- joined #gluster
23:09 jbautista- joined #gluster
23:14 calavera joined #gluster
23:23 ajames-41678 joined #gluster
23:47 Rapture joined #gluster
23:53 harish joined #gluster
