
IRC log for #gluster, 2015-03-09


All times shown according to UTC.

Time Nick Message
00:14 bala joined #gluster
00:24 mbelaninja joined #gluster
00:46 theron joined #gluster
01:00 topshare joined #gluster
01:29 dev-zero joined #gluster
01:29 dev-zero joined #gluster
01:39 mbelaninja joined #gluster
01:46 harish joined #gluster
01:47 theron joined #gluster
01:50 plarsen joined #gluster
01:51 DV joined #gluster
02:01 pjschmitt joined #gluster
02:07 coredump joined #gluster
02:19 nangthang joined #gluster
02:26 hflai joined #gluster
02:26 haomaiwa_ joined #gluster
02:31 kshlm joined #gluster
02:44 RC123 joined #gluster
02:46 DV joined #gluster
03:18 theron joined #gluster
03:19 bharata-rao joined #gluster
03:21 gildub joined #gluster
03:30 johnnytran joined #gluster
03:30 kumar joined #gluster
03:59 RameshN joined #gluster
04:03 spandit joined #gluster
04:06 kanagaraj joined #gluster
04:15 nbalacha joined #gluster
04:15 meghanam joined #gluster
04:32 bala joined #gluster
04:36 shubhendu joined #gluster
04:38 mattmcc joined #gluster
04:41 anoopcs joined #gluster
04:42 rafi joined #gluster
04:42 ppai joined #gluster
04:46 Leildin joined #gluster
04:49 jiffin joined #gluster
04:53 ndarshan joined #gluster
04:53 jiffin1 joined #gluster
04:55 theron joined #gluster
04:56 gem joined #gluster
05:11 lalatenduM joined #gluster
05:28 karnan joined #gluster
05:30 schandra joined #gluster
05:30 atinmu joined #gluster
05:43 dusmantkp_ joined #gluster
05:47 schandra joined #gluster
05:48 anil joined #gluster
05:51 Apeksha joined #gluster
05:54 ramteid joined #gluster
05:57 atalur joined #gluster
06:00 kaushal_ joined #gluster
06:04 shubhendu joined #gluster
06:07 vimal joined #gluster
06:07 raghu joined #gluster
06:09 deepakcs joined #gluster
06:13 kdhananjay joined #gluster
06:15 atinmu joined #gluster
06:18 rafi joined #gluster
06:18 kumar joined #gluster
06:22 kdhananjay joined #gluster
06:27 kshlm joined #gluster
06:32 Bhaskarakiran joined #gluster
06:34 rafi joined #gluster
06:35 kanagaraj joined #gluster
06:38 kumar joined #gluster
06:41 RameshN joined #gluster
06:43 atinmu joined #gluster
06:44 theron joined #gluster
06:46 atalur joined #gluster
06:50 meghanam joined #gluster
07:00 nangthang joined #gluster
07:07 meghanam joined #gluster
07:09 atalur joined #gluster
07:12 meghanam joined #gluster
07:14 bala joined #gluster
07:15 schandra joined #gluster
07:21 jtux joined #gluster
07:31 the-me joined #gluster
07:31 fsimonce joined #gluster
07:32 RC123 joined #gluster
07:33 dusmantkp_ joined #gluster
07:33 maveric_amitc_ joined #gluster
07:52 Philambdo joined #gluster
07:53 coreping joined #gluster
07:56 schandra joined #gluster
07:57 [Enrico] joined #gluster
07:59 atalur joined #gluster
08:01 Manikandan joined #gluster
08:03 gildub joined #gluster
08:04 kovshenin joined #gluster
08:13 bala joined #gluster
08:19 hchiramm joined #gluster
08:22 mbukatov joined #gluster
08:23 meghanam joined #gluster
08:33 theron joined #gluster
08:37 glusterbot News from newglusterbugs: [Bug 1197631] glusterd crashed after peer probe <https://bugzilla.redhat.com/show_bug.cgi?id=1197631>
08:41 karnan joined #gluster
08:43 ghenry joined #gluster
08:58 o5k joined #gluster
08:59 krishnan_p joined #gluster
09:04 yosafbridge joined #gluster
09:05 xavih joined #gluster
09:05 malevolent joined #gluster
09:06 Telsin joined #gluster
09:07 glusterbot News from newglusterbugs: [Bug 1199906] Changelog: Include leftover changelog into existing htime file <https://bugzilla.redhat.com/show_bug.cgi?id=1199906>
09:11 Slashman joined #gluster
09:15 Manikandan joined #gluster
09:15 liquidat joined #gluster
09:19 kdhananjay joined #gluster
09:23 kovshenin joined #gluster
09:48 dusmantkp_ joined #gluster
09:50 Norky joined #gluster
09:57 ThatGraemeGuy joined #gluster
10:09 7JTACI56L joined #gluster
10:18 soumya joined #gluster
10:21 d-fence joined #gluster
10:21 theron joined #gluster
10:22 Manikandan joined #gluster
10:22 RC123 joined #gluster
10:26 malevolent joined #gluster
10:26 xavih joined #gluster
10:31 nachosmooth joined #gluster
10:31 nachosmooth Hi
10:31 glusterbot nachosmooth: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:31 nachosmooth May I ask a question? I have a problem deleting a peer. It's out of my network and I'm not using it in any brick, but I want to remove it from my "gluster peer status"
10:42 nachosmooth1 joined #gluster
10:48 deniszh joined #gluster
10:53 dusmantkp_ joined #gluster
10:53 partner nachosmooth: what was the question?
10:53 partner or problem?
10:56 partner gluster peer detach <hostname> is the command you are looking for to remove the old peer
10:57 nachosmooth1 yes, but it's not working. The peer is off
10:57 nachosmooth1 the machine was deleted
10:58 nachosmooth1 and gluster peer detach is not working
10:58 partner tried with force ?
11:00 nachosmooth1 gluster peer detach force <hostname> ?
11:00 nachosmooth1 great
11:00 nachosmooth1 it worked
11:00 partner great
11:00 nachosmooth1 thanks partner
11:00 nachosmooth1 ;-)
11:00 partner np
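
For reference, the detach sequence that finally worked above usually looks like the sketch below; the hostname is a placeholder, and in current gluster releases the force keyword goes after the hostname:

    # Detach a peer normally; if the host no longer exists or is unreachable, append force.
    gluster peer detach old-peer.example.com
    gluster peer detach old-peer.example.com force
    # Confirm it has disappeared from the pool.
    gluster peer status
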
11:02 nachosmooth1 register nachosmooth
11:04 T0aD joined #gluster
11:05 harish joined #gluster
11:07 xavih joined #gluster
11:07 glusterbot News from newglusterbugs: [Bug 1192378] Disperse volume: client crashed while running renames with epoll enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1192378>
11:07 malevolent joined #gluster
11:07 xavih joined #gluster
11:09 nachosmooth joined #gluster
11:14 nachosmooth joined #gluster
11:15 nachosmooth1 left #gluster
11:15 rotbeard joined #gluster
11:16 ppai joined #gluster
11:16 meghanam joined #gluster
11:19 nachosmooth joined #gluster
11:19 dusmantkp_ joined #gluster
11:23 nachosmooth left #gluster
11:24 jiffin joined #gluster
11:24 edwardm61 joined #gluster
11:28 atalur joined #gluster
11:29 nachosmooth joined #gluster
11:41 ira joined #gluster
11:45 ppai joined #gluster
11:46 diegows joined #gluster
11:48 jiffin1 joined #gluster
11:53 overclk joined #gluster
11:54 CP|AFK joined #gluster
12:02 jiffin joined #gluster
12:02 soumya_ joined #gluster
12:10 theron joined #gluster
12:13 dusmantkp_ joined #gluster
12:16 bala joined #gluster
12:17 B21956 joined #gluster
12:22 theron joined #gluster
12:30 topshare joined #gluster
12:46 theron joined #gluster
12:52 LebedevRI joined #gluster
12:53 wkf joined #gluster
12:54 lifeofguenter joined #gluster
13:03 hagarth joined #gluster
13:07 topshare joined #gluster
13:08 soumya_ joined #gluster
13:13 aravindavk joined #gluster
13:13 theron joined #gluster
13:24 chirino joined #gluster
13:24 ppai joined #gluster
13:29 ildefonso joined #gluster
13:29 firemanxbr joined #gluster
13:32 dgandhi joined #gluster
13:33 dgandhi joined #gluster
13:34 dgandhi joined #gluster
13:38 coredump joined #gluster
13:40 georgeh-LT2 joined #gluster
13:41 atalur joined #gluster
13:41 nangthang joined #gluster
13:43 [Enrico] joined #gluster
13:49 luis_silva joined #gluster
13:57 jmarley joined #gluster
13:58 bene2 joined #gluster
14:03 topshare joined #gluster
14:15 bennyturns joined #gluster
14:21 kovshenin joined #gluster
14:22 kovshenin joined #gluster
14:24 sprachgenerator joined #gluster
14:29 bala joined #gluster
14:32 rwheeler joined #gluster
14:42 doekia joined #gluster
14:51 theron joined #gluster
15:00 aravindavk joined #gluster
15:08 luis_silva Hey all, I was wondering if there's a way to throttle iops on gluster. We are using gluster to mirror kvm qcow2 image files on 2 systems via native clients.
15:10 virusuy joined #gluster
15:15 wushudoin joined #gluster
15:18 xoritor is there a libgfapi based docker-registry storage driver?
15:18 xoritor that would be pretty cool if there were....
15:30 Ramereth joined #gluster
15:31 plarsen joined #gluster
15:43 jmarley joined #gluster
15:48 bennyturns joined #gluster
15:51 ank joined #gluster
15:56 diegows joined #gluster
15:59 anarcat left #gluster
16:00 sputnik13 joined #gluster
16:02 jcarter2 joined #gluster
16:02 Bhaskarakiran joined #gluster
16:06 shubhendu joined #gluster
16:06 jiffin joined #gluster
16:07 jobewan joined #gluster
16:10 firemanxbr joined #gluster
16:10 lifeofguenter joined #gluster
16:11 mkzero joined #gluster
16:13 Pupeno joined #gluster
16:13 Pupeno joined #gluster
16:18 firemanxbr hi guys I have one problem with my gluster cluster in ovirt
16:19 firemanxbr one host inform this error: "Gluster command failed on server."
16:20 firemanxbr status for my gluster service in this host: http://ur1.ca/jvn0w
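
No answer follows in the log; a minimal first-pass check for "Gluster command failed on server" on an oVirt-managed host is sketched below, assuming a systemd host and default log locations:

    # Is the management daemon actually running on the affected host?
    systemctl status glusterd          # or: service glusterd status
    # The last glusterd log entries usually say which command failed and why.
    tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    # Check that the host still sees its peers.
    gluster peer status
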
16:21 shubhendu joined #gluster
16:28 pjschmitt xoritor: did you find out what mellanox "licenses" you guys have?
16:36 yossarianuk joined #gluster
16:37 yossarianuk hi - can anyone help me troubleshoot a glusterfs issue - I have set up a replicated volume (2 servers) - however when I add files they are not replicated.
16:38 yossarianuk gluster volume status -> shows all online - 'Y' except 'NFS Server' on one of the servers
16:39 yossarianuk but if I do a '/etc/init.d/nfs status' on the server showing as 'N', the status is running - also I can see port 111 and the NFS port via nmap from the other server
16:41 yossarianuk i.e on one server I get 'NFS Server on localhost                                 N/A     N       N/A'
16:41 yossarianuk however nfs is running
16:42 yossarianuk (just to confirm - do I need NFS to use glusterfs?)
16:43 Leildin I don't use nfs to use glusterfs personally
16:44 Leildin I have a samba mount point on one node and access data that way
16:44 Leildin maybe not the best way to do stuff but it works for what I need
16:46 yossarianuk Leildin: thanks
16:47 Leildin pm me for samba config if you want
16:47 yossarianuk Sorry ignore the NFS bit, after restarting the cento5 server when I now use ' gluster volume status' I see 'Y' in all rows of the 'online' column
16:47 yossarianuk but files are not replicated .
16:47 yossarianuk it says 'There are no active volume tasks'
16:48 gem joined #gluster
16:56 yossarianuk ok - i'm using this guide - https://www.howtoforge.com/how-to-install-glusterfs-with-a-replicated-volume-over-2-nodes-on-ubuntu-14.04
16:56 yossarianuk I created the volume using the folder (on both servers) - /exports/gluster/gvHANA/
16:57 yossarianuk if I add a file on one of the boxes to  /exports/gluster/gvHANA/ - nothing appears on the other box
16:58 yossarianuk never mind - I'm off - I'll try again tomorrow.
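
The thread ends unresolved, but the symptom described (files copied straight into /exports/gluster/gvHANA/ on one node never appearing on the other) is the classic result of writing into the brick directory instead of through a client mount. A minimal sketch, assuming the volume is named gvHANA and server1 is one of the two peers:

    # Mount the volume with the native FUSE client (NFS is not required for this).
    mkdir -p /mnt/gvHANA
    mount -t glusterfs server1:/gvHANA /mnt/gvHANA
    # Write through the mount point; gluster then replicates to both bricks.
    cp testfile /mnt/gvHANA/
    # Do not write directly into the brick path (/exports/gluster/gvHANA/) on the servers.
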
17:06 neofob joined #gluster
17:18 bennyturns joined #gluster
17:19 Rapture joined #gluster
17:36 sputnik13 joined #gluster
17:55 lalatenduM joined #gluster
17:55 bennyturns joined #gluster
17:55 lifeofguenter joined #gluster
17:56 lalatenduM joined #gluster
18:06 luis_silva joined #gluster
18:33 T3 joined #gluster
18:35 ekuric joined #gluster
18:52 virusuy joined #gluster
18:58 theron joined #gluster
19:00 Creeture joined #gluster
19:01 Creeture When I add bricks to a dist-repl volume, are they added in the order specified? Like server1:/brick0 server2:/brick0 server1:/brick1 server2:/brick1 or is there some other determinant?
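
The question goes unanswered in the log. As a general rule, bricks passed to add-brick are grouped into replica sets in the order given, so for a replica 2 volume each adjacent pair of bricks becomes a mirror. A hedged sketch with placeholder volume and brick names:

    # server1:/brick1 and server2:/brick1 form one replica pair,
    # and the next two bricks form the next pair.
    gluster volume add-brick myvol server1:/brick1 server2:/brick1
    gluster volume add-brick myvol server1:/brick2 server2:/brick2
    # Rebalance so existing data spreads onto the new bricks.
    gluster volume rebalance myvol start
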
19:17 nage joined #gluster
19:17 vincent_vdk joined #gluster
19:22 kovshenin joined #gluster
19:24 kovshenin joined #gluster
19:31 DV joined #gluster
20:10 deniszh joined #gluster
20:11 roost joined #gluster
20:12 deniszh left #gluster
20:23 theron joined #gluster
20:25 B21956 left #gluster
20:40 Pupeno_ joined #gluster
20:49 theron joined #gluster
20:55 Rydekull joined #gluster
21:09 glusterbot News from newglusterbugs: [Bug 1200150] NFS mount to XENserver versions 6.2 and 6.5 fails. Incompatible NFS version <https://bugzilla.redhat.com/show_bug.cgi?id=1200150>
21:20 roost Hey, so I set up geo-replication in 3.6.2 with Ubuntu servers and one server out of the 6 says faulty. We used to do geo-replication in gluster 3.2 with that server too
21:21 roost what would make it say faulty? any ideas? a config somewhere that I don't know of?
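
No reply follows in the log; the usual starting points for a Faulty geo-replication node on 3.6.x are sketched below, with placeholder master and slave names and default log paths:

    # Per-node session state; the detail view includes the last error for the faulty node.
    gluster volume geo-replication mastervol slavehost::slavevol status detail
    # The worker log on the faulty node normally names the real cause
    # (ssh/pem key problems, a missing slave volume, gsyncd version mismatch, etc.).
    less /var/log/glusterfs/geo-replication/mastervol/*.log
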
21:28 jackdpeterson joined #gluster
21:35 kiwnix joined #gluster
21:39 jackdpeterson @purpleidea -- ran into an interesting issue: When rebuilding servers from scratch but preserving the LVM partition, 1. mounts do not work. Performing vgscan --mknodes gets the system to discover the previous lvm volume; however, it has a different UUID than what puppet is forcing. How can that be fixed? When attempting to specify fsuuid => '183e37a7-33ab-45ee-a3b6-644593c4dd44', as an example extracted from /dev/disk/by-uuid/*, I get invalid fsuuid err
21:41 jackdpeterson looking at the regex on brick.pp -- line ~87 it looks like it ought to be right... but no such luck :-\
21:50 wkf joined #gluster
21:56 purpleidea jackdpeterson: does this (or the subsequent FAQ entry) help: https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md#provisioning-fails-with-cant-open-devsdb1-exclusively
21:56 purpleidea oh btw jackdpeterson did you have trouble with that patch, or did it work out?
21:58 jackdpeterson @purpleidea -- no, neither FAQ entry is exactly what I'm facing. in this case it's an already existing LVM volume (rediscovered w/ vgscan) on effectively a new system with the same hostname. For some reason it expects a different UUID. Manually mounting the FS allows puppet to continue to run and it modified the /etc/fstab; however, it doesn't set it to the right UUID
21:59 jackdpeterson so I'm just trying to diagnose that and see if I can override the uuid that puppet is expecting or a process on how to get an existing LVM volume re-mounted and happy (that can also auto-restore on reboot)
21:59 jackdpeterson @purpleidea -- regarding the patch ... that's on my to-do list. I may just close out the PR and recreate if my local instance is still hosed.
22:08 purpleidea jackdpeterson: ah!
22:09 jackdpeterson of course if I wipe the disk, it re-provisions just fine ... but I'd prefer to not re-synchronize ~ 300+ GB of data (lots of small files) mind you!
22:09 purpleidea jackdpeterson: okay, the answer is puppet-gluster supports either auto generating and setting a UUID, or you can pick a pre-existing one. to use a pre-existing one, set: https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md#fsuuid
22:10 purpleidea for at least one puppet run, and then you can either leave it, or take it out. it will have remembered
22:10 jackdpeterson @purpleidea -- look @ my first comment regarding the FSUUID. When set manually, the regex is failing for some reason. That FSUUID is what I assume is in /dev/disk/by-uuid that points to /dev/dm-0?
22:11 purpleidea jackdpeterson: i think i understand... my above comment should help, otherwise if it's "failing" then there is perhaps an error?
22:12 purpleidea (to debug, print out what the expected fsuuid puppet sees and compare)
22:14 jackdpeterson Error 400 on SERVER: The chosen fs uuid: '183e37a7-33ab-45ee-a3b6-644593c4dd44' is not valid. (ln 88 brick.pp)
22:15 purpleidea jackdpeterson: what version of puppet-gluster are you using? git master?
22:16 jackdpeterson yeah, it's going to be very close to master
22:16 jackdpeterson master + my mount param changes
22:17 jackdpeterson # if ("${fsuuid}" != '') and "${fsuuid}" =~ /^[a-f0-9]{8}\-[a-f0-9]{4}\-[a-f0-9]{4}\-[a-f0-9]{4}\-[a-f0-9]{12}$/ { fail("The chosen fs uuid: '${fsuuid}' is not valid.") }
22:17 jackdpeterson that's the regex and it looks the same as master. What's weird is that when I'm visually scanning it ... the sequences match up and from my view, they should pass!
22:18 purpleidea aha
22:18 jackdpeterson 8,4,4,4,12  ... a-f0-9
22:19 purpleidea run this on your puppet-gluster git tree that you USE in production: 8f046435d2357ec2cd149a3e4bec56fc172bcc33
22:19 purpleidea err
22:19 purpleidea i mean run:
22:19 purpleidea git log | grep 8f046435d2357ec2cd149a3e4bec56fc172bcc33
22:19 roost joined #gluster
22:20 purpleidea and when you're ready, i'll tell you what the issue is
22:20 purpleidea jackdpeterson: ^
22:20 jackdpeterson argh, missing .git on this sucker
22:20 jackdpeterson (installed originally via puppet module install / upgrade) on prod
22:20 bennyturns joined #gluster
22:20 purpleidea jackdpeterson: puppet module install will install an OLD version
22:20 jackdpeterson ah, oi
22:20 jackdpeterson well, that'd do it!
22:21 jackdpeterson alright, let me update my prod repo with correct version and see where that takes me
22:21 purpleidea jackdpeterson: you are hitting an OLD bug, fixed in 8f046435d2357ec2cd149a3e4bec56fc172bcc33 because you're not running code you think you're running
22:22 purpleidea jackdpeterson: i'm confident that will fix your issue. i g2g, but lmk how it goes, get more familiar with git (re: your patch) and send away. cheers!
22:22 jackdpeterson Thanks!
22:23 purpleidea yw
22:23 purpleidea ,,(next)
22:23 glusterbot Another satisfied customer... NEXT!
22:27 jackdpeterson Boom bam, worked like a charm
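
For anyone else reusing an existing LVM-backed brick with puppet-gluster, the filesystem UUID that the fsuuid parameter expects can be read straight off the device; a minimal sketch, assuming the logical volume shows up as /dev/dm-0 as in the conversation above:

    # Print just the filesystem UUID of the existing brick filesystem.
    blkid -s UUID -o value /dev/dm-0
    # The same value is visible via the by-uuid symlinks mentioned above.
    ls -l /dev/disk/by-uuid/ | grep dm-0
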
22:31 hagarth joined #gluster
22:40 glusterbot News from newglusterbugs: [Bug 1105883] Enabling DRC for nfs causes memory leaks and crashes <https://bugzilla.redhat.com/show_bug.cgi?id=1105883>
22:55 gildub joined #gluster
23:07 theron joined #gluster
23:22 jackdpeterson @purpleidea -- looks like one additional weirdness came in with the upgrade of the puppet module. The UUID keeps resetting to UUID=00000000-0000-0000-0000-000000000000. Then on each puppet run it attempts to set it to the correct UUID.
23:24 bala joined #gluster
23:25 social joined #gluster
23:27 Leildin joined #gluster
23:30 ninkotech_ joined #gluster
23:30 ninkotech joined #gluster
23:45 topshare joined #gluster
23:52 topshare joined #gluster
