
IRC log for #gluster, 2015-01-07


All times shown according to UTC.

Time Nick Message
00:01 edwardm61 joined #gluster
00:06 eka joined #gluster
00:38 iPancreas joined #gluster
01:21 ron-slc joined #gluster
01:22 corretico joined #gluster
01:31 MattJ_NZ joined #gluster
01:37 lalatenduM joined #gluster
01:39 iPancreas joined #gluster
01:45 B21956 left #gluster
02:03 harish joined #gluster
02:08 haomaiwa_ joined #gluster
02:16 RameshN joined #gluster
02:19 DV joined #gluster
02:21 gem joined #gluster
02:22 eka joined #gluster
02:25 meghanam joined #gluster
02:37 chirino joined #gluster
02:39 iPancreas joined #gluster
02:40 eka joined #gluster
02:45 _Bryan_ joined #gluster
03:07 nrcpts joined #gluster
03:07 plarsen joined #gluster
03:10 systemonkey joined #gluster
03:21 sputnik13 joined #gluster
03:25 vimal joined #gluster
03:33 RameshN joined #gluster
03:34 bharata-rao joined #gluster
03:40 iPancreas joined #gluster
03:43 kanagaraj joined #gluster
03:51 suman_d joined #gluster
03:54 itisravi joined #gluster
03:56 ppai joined #gluster
03:59 htrmeira joined #gluster
04:02 RameshN joined #gluster
04:03 nrcpts joined #gluster
04:09 hagarth joined #gluster
04:10 shubhendu joined #gluster
04:16 Manikandan joined #gluster
04:17 PaulCuzner joined #gluster
04:18 atinmu joined #gluster
04:18 gem joined #gluster
04:19 rafi joined #gluster
04:22 nbalacha joined #gluster
04:25 soumya joined #gluster
04:27 DJClean joined #gluster
04:28 suman_d joined #gluster
04:29 nishanth joined #gluster
04:39 elico joined #gluster
04:40 iPancreas joined #gluster
04:45 anoopcs joined #gluster
04:48 lalatenduM joined #gluster
04:52 saurabh joined #gluster
04:55 gem joined #gluster
04:59 marcoceppi joined #gluster
04:59 marcoceppi joined #gluster
04:59 bala joined #gluster
05:00 prasanth_ joined #gluster
05:01 kdhananjay joined #gluster
05:04 marcoceppi_ joined #gluster
05:13 ndarshan joined #gluster
05:18 suman_d joined #gluster
05:19 sahina joined #gluster
05:21 kshlm joined #gluster
05:25 suman_d joined #gluster
05:31 raghu joined #gluster
05:37 anil joined #gluster
05:38 smohan joined #gluster
05:39 hagarth joined #gluster
05:41 iPancreas joined #gluster
05:46 overclk joined #gluster
05:49 dusmant joined #gluster
05:54 ramteid joined #gluster
06:01 kshlm joined #gluster
06:13 hchiramm_ joined #gluster
06:15 jiffin joined #gluster
06:16 hagarth joined #gluster
06:22 glusterbot News from newglusterbugs: [Bug 1166020] self-heal-algorithm with option "full" doesn't heal sparse files correctly <https://bugzilla.redhat.com/show_bug.cgi?id=1166020>
06:25 rjoseph joined #gluster
06:27 kdhananjay joined #gluster
06:30 Philambdo joined #gluster
06:35 nrcpts joined #gluster
06:41 iPancreas joined #gluster
06:45 atalur joined #gluster
07:12 rgustafs joined #gluster
07:19 atalur joined #gluster
07:19 ctria joined #gluster
07:26 jtux joined #gluster
07:26 eightyeight joined #gluster
07:27 sputnik13 joined #gluster
07:27 hagarth joined #gluster
07:34 Philambdo joined #gluster
07:34 nangthang joined #gluster
07:34 lyang0 joined #gluster
07:36 PaulCuzner left #gluster
07:39 sputnik13 joined #gluster
07:42 iPancreas joined #gluster
07:42 Fen2 joined #gluster
07:52 atalur joined #gluster
07:53 LebedevRI joined #gluster
07:55 nangthang joined #gluster
08:00 fandi joined #gluster
08:09 hagarth joined #gluster
08:18 sputnik13 joined #gluster
08:22 sakshi joined #gluster
08:23 Manikandan joined #gluster
08:24 fsimonce joined #gluster
08:26 deniszh joined #gluster
08:36 meghanam joined #gluster
08:36 soumya joined #gluster
08:41 anil joined #gluster
08:42 kovshenin joined #gluster
08:42 iPancreas joined #gluster
08:48 dusmant joined #gluster
08:49 Slashman joined #gluster
08:50 hagarth joined #gluster
08:50 kdhananjay joined #gluster
08:52 rjoseph joined #gluster
08:54 SOLDIERz joined #gluster
09:01 [Enrico] joined #gluster
09:02 Norky joined #gluster
09:03 kaushal_ joined #gluster
09:04 elico joined #gluster
09:04 T0aD joined #gluster
09:10 nishanth joined #gluster
09:10 mbukatov joined #gluster
09:11 kshlm joined #gluster
09:13 pcaruana joined #gluster
09:14 mrEriksson joined #gluster
09:18 Dw_Sn joined #gluster
09:19 atinmu joined #gluster
09:19 lkthomas joined #gluster
09:19 lkthomas hey guys
09:19 lkthomas for geo-replication on 3.5, does the sync interval change to 15 seconds?
09:22 nangthang joined #gluster
09:23 Dw_Sn_ joined #gluster
09:23 glusterbot News from resolvedglusterbugs: [Bug 1049727] Dist-geo-rep : volume won't be able to stop untill the geo-rep session is deleted. <https://bugzilla.redhat.com/show_bug.cgi?id=1049727>
09:42 spandit joined #gluster
09:43 iPancreas joined #gluster
09:45 soumya joined #gluster
09:53 glusterbot News from newglusterbugs: [Bug 1175745] AFR + Snapshot : Read operation on  file in split-brain is successful in USS <https://bugzilla.redhat.com/show_bug.cgi?id=1175745>
09:53 glusterbot News from newglusterbugs: [Bug 1179663] CIFS:[USS]: glusterfsd OOM killed when 255 snapshots were browsed at CIFS mount and Control+C is issued <https://bugzilla.redhat.com/show_bug.cgi?id=1179663>
09:53 glusterbot News from newglusterbugs: [Bug 1179658] Add brick fails if parent dir of new brick and existing brick is same and volume was accessed using libgfapi and smb. <https://bugzilla.redhat.com/show_bug.cgi?id=1179658>
09:53 glusterbot News from newglusterbugs: [Bug 1179659] AFR + Snapshot : Read operation on  file in split-brain is successful in USS <https://bugzilla.redhat.com/show_bug.cgi?id=1179659>
09:55 Manikandan joined #gluster
10:00 aravindavk joined #gluster
10:13 kumar joined #gluster
10:15 kshlm joined #gluster
10:22 meghanam joined #gluster
10:25 lkthomas guys, I am trying to push-pem to the repl2 host, but it says passwordless ssh login has not been set up
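
[note] The push-pem failure above usually means the node creating the geo-replication session cannot ssh into the slave as root without a password. A minimal sketch of the usual 3.5-era setup, assuming a master volume named "mastervol" and a slave volume named "slavevol" on the host repl2 (both volume names are placeholders; only the host name comes from the log):

    ssh-keygen -t rsa                         # create a key pair on the master node, if one does not exist yet
    ssh-copy-id root@repl2                    # allow root to log in to repl2 without a password
    ssh root@repl2 true                       # verify the passwordless login actually works
    gluster system:: execute gsec_create      # generate the common pem keys used by geo-replication
    gluster volume geo-replication mastervol repl2::slavevol create push-pem

Once the passwordless check passes, "create push-pem" distributes the keys to the slave nodes and the session can be started with the matching "start" command.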
10:27 harish joined #gluster
10:30 atinmu joined #gluster
10:35 lkthomas anyone still around ?
10:37 hagarth joined #gluster
10:40 badone joined #gluster
10:43 iPancreas joined #gluster
10:45 vimal joined #gluster
10:48 Dw_Sn joined #gluster
11:20 elico left #gluster
11:31 rafi1 joined #gluster
11:40 smohan_ joined #gluster
11:44 iPancreas joined #gluster
11:47 kkeithley1 joined #gluster
11:48 hagarth joined #gluster
11:49 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
11:52 purpleidea joined #gluster
11:54 kdhananjay joined #gluster
11:58 purpleidea joined #gluster
12:04 ppai joined #gluster
12:04 lpabon joined #gluster
12:07 jdarcy joined #gluster
12:08 purpleidea joined #gluster
12:12 rjoseph joined #gluster
12:18 itisravi_ joined #gluster
12:18 jiffin joined #gluster
12:21 elico joined #gluster
12:30 rgustafs joined #gluster
12:35 ira joined #gluster
12:36 ira joined #gluster
12:44 iPancreas joined #gluster
12:46 nshaikh joined #gluster
12:50 fandi joined #gluster
13:01 Slashman_ joined #gluster
13:02 Fen1 joined #gluster
13:10 jdarcy joined #gluster
13:15 rjoseph joined #gluster
13:17 rafi1 joined #gluster
13:19 karnan joined #gluster
13:21 anoopcs joined #gluster
13:22 bene joined #gluster
13:25 calisto joined #gluster
13:36 elico joined #gluster
13:40 B21956 joined #gluster
13:42 Dw_Sn joined #gluster
13:42 rafi1 joined #gluster
13:43 seblo joined #gluster
13:44 sputnik13 joined #gluster
13:45 iPancreas joined #gluster
13:50 nbalacha joined #gluster
13:53 harish joined #gluster
13:55 ppai joined #gluster
13:57 tdasilva joined #gluster
13:57 shubhendu joined #gluster
13:59 julim joined #gluster
14:01 elico1 joined #gluster
14:07 dusmant joined #gluster
14:08 aravindavk joined #gluster
14:10 virusuy joined #gluster
14:10 virusuy joined #gluster
14:12 ekuric joined #gluster
14:15 T3 joined #gluster
14:16 lmickh joined #gluster
14:17 elico joined #gluster
14:24 glusterbot News from newglusterbugs: [Bug 1175739] [USS]: Non root user who has no access to a directory, from NFS mount, is able to access the files under .snaps under that directory <https://bugzilla.redhat.com/show_bug.cgi?id=1175739>
14:24 glusterbot News from newglusterbugs: [Bug 1175744] [USS]: Unable to access .snaps after snapshot restore after directories were deleted and recreated <https://bugzilla.redhat.com/show_bug.cgi?id=1175744>
14:24 glusterbot News from newglusterbugs: [Bug 1175752] [USS]: On a successful lookup, snapd logs are filled with Warnings "dict OR key (entry-point) is NULL" <https://bugzilla.redhat.com/show_bug.cgi?id=1175752>
14:24 glusterbot News from newglusterbugs: [Bug 1175758] [USS] : Rebalance process tries to connect to snapd and in case when snapd crashes it might affect rebalance process <https://bugzilla.redhat.com/show_bug.cgi?id=1175758>
14:24 glusterbot News from newglusterbugs: [Bug 1175742] [USS]: browsing .snaps directory with CIFS fails with "Invalid argument" <https://bugzilla.redhat.com/show_bug.cgi?id=1175742>
14:26 glube joined #gluster
14:27 glube Hello all, can someone here help me with mandatory locking on a glusterfs volume?
14:29 glube anyone there?
14:29 sickness me =)_
14:30 glube helooooooo i can do with a bit of help please
14:30 sickness I'm sorry but I'm a noob and just tried to compile glusterfs, never actually used it =_)
14:31 DV joined #gluster
14:31 glube no worries thanks sickness
14:32 sickness yw :_)
14:32 pdrakeweb joined #gluster
14:37 gothos hey, is there some preferred way to set up a virtual gluster testing environment?
14:39 glube you can install two linux instances and install gluster on top
14:42 lpabon joined #gluster
14:45 iPancreas joined #gluster
14:52 dberry joined #gluster
14:53 ndevos gothos: I think everyone has their own preference :)
14:53 dberry joined #gluster
14:53 ndevos puppet-gluster is one way to do it
14:53 ndevos and there are ansible modules for lvm and gluster too, so that should be easy to use (which is what I will try out soon)
14:54 ndevos and, maybe vagrant? https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/
14:58 gothos ndevos: tbh, I would very much like to avoid puppet, especially since we are using salt around here
14:58 neofob joined #gluster
14:58 gothos so I guess I'll use the vagrant script and modify it to use salt
14:58 * ndevos has no idea what salt exactly is, but it seems to be hot
14:59 ndevos @lucky salt
14:59 glusterbot ndevos: http://en.wikipedia.org/wiki/Salt
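
[note] Whichever tool ends up provisioning the VMs (puppet-gluster, ansible, vagrant, or salt as gothos plans), the GlusterFS part of a small test environment is only a handful of commands. A minimal sketch, assuming two VMs named node1 and node2 that can resolve each other, glusterd running on both, and a brick directory /bricks/brick1 prepared on each (all names are placeholders):

    # run on node1
    gluster peer probe node2
    gluster volume create testvol replica 2 node1:/bricks/brick1 node2:/bricks/brick1
    gluster volume start testvol
    # mount from a client (or from either node) to try it out
    mkdir -p /mnt/testvol && mount -t glusterfs node1:/testvol /mnt/testvol

If the brick directories sit on the root filesystem, the volume create step warns and needs "force" appended.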
14:59 Dw_Sn joined #gluster
15:00 neofob left #gluster
15:11 virusuy joined #gluster
15:11 virusuy joined #gluster
15:12 _Bryan_ joined #gluster
15:21 bala joined #gluster
15:29 plarsen joined #gluster
15:29 plarsen joined #gluster
15:32 sputnik13 joined #gluster
15:39 prasanth_ joined #gluster
15:45 n-st joined #gluster
15:46 iPancreas joined #gluster
15:49 dgandhi joined #gluster
15:52 jobewan joined #gluster
15:54 deepakcs joined #gluster
15:56 saurabh joined #gluster
15:57 plarsen joined #gluster
15:57 soumya joined #gluster
15:59 iPancreas joined #gluster
16:05 sputnik13 joined #gluster
16:08 calisto joined #gluster
16:12 zutto hmh.. so upgraded from 3.4.0alpha to 3.5.2, ext4 bug all gone
16:19 roost joined #gluster
16:20 Guest30201 joined #gluster
16:23 smohan joined #gluster
16:24 jmarley joined #gluster
16:25 nishanth joined #gluster
16:35 meghanam joined #gluster
16:36 prasanth_ joined #gluster
16:40 kke joined #gluster
16:46 Intensity joined #gluster
16:48 elico joined #gluster
16:50 soumya joined #gluster
16:51 T3 joined #gluster
16:53 rafi1 joined #gluster
17:03 RameshN joined #gluster
17:09 calisto1 joined #gluster
17:15 virusuy joined #gluster
17:15 virusuy joined #gluster
17:31 neofob joined #gluster
17:32 SOLDIERz joined #gluster
17:45 semiosis glusterbot: whoami
17:45 glusterbot semiosis: I don't recognize you. You can message me either of these two commands: "user identify <username> <password>" to log in or "user register <username> <password>" to register.
17:48 lkoranda joined #gluster
17:51 virusuy joined #gluster
18:00 bene2 joined #gluster
18:07 sputnik13 joined #gluster
18:08 nueces joined #gluster
18:25 lalatenduM joined #gluster
18:29 ira joined #gluster
18:41 semiosis currently watching jdarcy talk about glusterfs 4.0... http://redhatstorage.redhat.com/2015/01/07/glusterfs-4-0-bold-new-release/
18:43 suman_d joined #gluster
18:49 semiosis he just called 4.0 "a real game changer"
19:01 M28_ joined #gluster
19:08 jonb1 joined #gluster
19:10 M28___ joined #gluster
19:21 bennyturns joined #gluster
19:26 Hamburglr joined #gluster
19:31 Hamburglr I'm having trouble getting a 2 node gluster setup to sync files. I've followed the Basic Gluster Troubleshooting page and everything seems fine. Volume is Started and peer status is Connected. any ideas where to look?
19:35 T0aD joined #gluster
19:40 gothos have you checked the logs?
19:45 Hamburglr @gothos it says 0-sc12-www-client-1: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
19:46 Hamburglr and when I run gluster volume status the bricks both show as online
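
[note] The "failed to get the port number for remote subvolume" message comes from the client being unable to look up one brick's port from glusterd, even though the bricks report as online. A hedged checklist, assuming from the log prefix (0-sc12-www-client-1) that the volume is named sc12-www:

    gluster volume status sc12-www       # each brick should show a TCP port and Online "Y"
    gluster peer status                  # both nodes should show "Peer in Cluster (Connected)"
    gluster volume heal sc12-www info    # if the volume is replicated, lists entries still pending heal
    # from the client, check that the brick ports shown by "volume status" (typically 49152 and up
    # on 3.4+) and the glusterd port 24007 are reachable; a firewall blocking them is a common cause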
19:51 sputnik13 joined #gluster
20:03 bennyturns joined #gluster
20:08 calisto joined #gluster
20:28 DV joined #gluster
20:34 partner JoeJulian: btw the fix-layout indeed fixed the log entries on the client side (mismatch and what not), only took 33 days to complete, just in time before physical moves of the servers
20:35 JoeJulian I hope that's a good thing.
20:36 partner sure it is.
20:36 partner thanks for the tip. for some reason i've neglected running the fix-layout since files start to flow to the new brick anyway
20:36 partner also my directory structure was twice as big earlier before separated some data from that volume
20:36 fandi joined #gluster
20:37 partner so possibly it would have taken 2 months to complete and i often added more bricks during that time so it was kind of no-win situation
20:38 partner and as the old volumes were mostly full there was no room for new files anyways so whatever the hash would say it would be elsewhere most likely
20:38 partner i guess i'm doing it all wrong but at least it has not failed so far..
20:41 JoeJulian That's all that counts.
20:41 JoeJulian The best part about being on the bleeding edge is you can't do it wrong.
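
[note] For reference, the fix-layout run described above only rewrites the directory layouts (the hash ranges) so that new files start landing on newly added bricks; it does not migrate existing data. A short sketch, with "myvol" as a placeholder volume name:

    gluster volume rebalance myvol fix-layout start   # recompute layouts across all directories
    gluster volume rebalance myvol status             # per-node progress of the crawl

A plain "gluster volume rebalance myvol start" would additionally move existing files onto the bricks the new layout assigns them to, which is the far more expensive operation.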
20:43 partner weird thing happened today. a colleague accidentally managed to shut down one of the storage servers and at the very same moment one of the glusterd daemons died elsewhere on the cluster
20:44 partner first log entry on the rebooted box was 09:18:12, it was all up at 09:20:26
20:45 partner at 09:19:44 one box got: kernel: [8970308.467184] Killed process 3052 (glusterd) total-vm:9197264kB, anon-rss:7572384kB, file-rss:212kB
20:45 partner must be related, just no idea how/why..
20:47 partner only thing common is one distributed volume to which both boxes provide bricks
20:53 partner the log hints that some lock could not be acquired from the box that was rebooted, a bunch of errors around that and then nothing until the daemon got restarted
20:56 partner based on the graphs there was 5 gigs of cache memory and nothing that would have looked like a leak, apps roughly 2 gigs
20:56 partner 3.4.5 still running here, "just fyi"
21:06 iPancreas joined #gluster
21:07 DV joined #gluster
21:07 Telsin so I tried upgrading a 3.5.3 to 3.6.1 and got bit by https://bugzilla.redhat.com/show_bug.cgi?id=1173909
21:07 glusterbot Bug 1173909: medium, medium, ---, bugs, NEW , glusterd crash after upgrade from 3.5.2
21:08 Telsin unfortunately, rolling back to 3.5.3, I'm still getting the same crash
21:08 nueces joined #gluster
21:08 Telsin any ideas on how to recover/reset whatever is triggering that besides removing and remaking the affected brick?
21:11 JoeJulian Did you stop and restart the brick after the rollback?
21:11 JoeJulian Stopping glusterd (glusterfs-server) doesn't do that.
21:12 Telsin yes, stopped everything before rolling back. also rebooted
21:15 Telsin interestingly, the bricks are still running, it's just glusterd mgmt that won't
21:16 JoeJulian Oh, that's not the same bug then,
21:16 JoeJulian run "glusterd -d" and see if that makes the problem any clearer.
21:16 Telsin that's how I got this far, my logs are identical to the ones reported for that bug
21:17 JoeJulian Please fpaste the output of glusterd -d
21:18 JoeJulian If you're using an rpm based distro, you can "yum install fpaste; glusterd -d 2>&1 | fpaste" and be as lazy as I am.
21:19 Telsin I'm not running under gdb, but it's crashing in glusterd-op-sm
21:20 Telsin nice, coming right up
21:23 Telsin lol whups: http://paste.fedoraproject.org/166956/42066566
21:24 JoeJulian glusterd_add_brick_detail_to_dict isn't run unless you run volume status.
21:24 Telsin try this one: http://paste.fedoraproject.org/166959/06658311
21:24 Telsin and yes, that's what crashes it.
21:24 Telsin generally, because the volume is being used by ovirt
21:24 Telsin but specifically in that last case because I tried 'gluster vol status exports detail'
21:24 JoeJulian Oh, I thought you were saying that glusterd wouldn't start.
21:25 JoeJulian Ok, then it is that bug. :P
21:25 Telsin ah, no, it starts, then dies quickly thereafter ;)
21:25 glusterbot News from newglusterbugs: [Bug 1173909] glusterd crash after upgrade from 3.5.2 <https://bugzilla.redhat.com/show_bug.cgi?id=1173909>
21:27 JoeJulian The logic was broken between 3.5.2 and 3.5.3. Roll back to 3.5.2.
21:27 Telsin ah, just noticed that my working node was 3.5.2 and not 3.5.3, yes.
21:27 Telsin thanks, I'll try that
21:28 JoeJulian I'll update the bug report.
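
[note] On an rpm-based distro the rollback suggested here is typically a yum downgrade of the gluster packages; a hedged sketch, assuming the 3.5.2 packages are still available in the configured repository (the exact package set and version-release strings vary, and dependent subpackages such as glusterfs-libs, glusterfs-api and glusterfs-cli may need to be downgraded together):

    service glusterd stop                     # brick processes keep serving while glusterd is down
    yum list glusterfs\* --showduplicates     # find the exact 3.5.2 version-release string in the repo
    # the release suffix below ("-1.el6") is only an example; use whatever the list above shows
    yum downgrade glusterfs-3.5.2-1.el6 glusterfs-server-3.5.2-1.el6 glusterfs-fuse-3.5.2-1.el6
    service glusterd start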
21:29 SOLDIERz joined #gluster
21:35 theron_ joined #gluster
21:38 Telsin looks happier, thanks for the help
21:41 Arminder joined #gluster
21:42 iPancreas joined #gluster
21:42 sputnik13 joined #gluster
21:52 eka getting not found http://www.gluster.org/community/documentation/Getting_started_configure
21:53 systemonkey joined #gluster
21:53 n-st joined #gluster
21:54 partner the URL is missing index.php in the middle; the proper one would be http://www.gluster.org/community/documentation/index.php/Getting_started_configure
21:54 partner eka: where did that link come from?
21:59 bene joined #gluster
22:02 eka partner: let me check again
22:02 eka partner: http://www.gluster.org/documentation/Getting_started_overview/
22:04 badone joined #gluster
22:09 partner hmm i wonder who handles that area, hchiramm_ and lalatenduM were mentioned today in the meeting (or its aftermath) for the documentation part..
22:12 badone joined #gluster
22:13 JoeJulian git clone git@forge.gluster.org:gluster-site/gluster-site.git
22:14 JoeJulian though I can't find that link there either.
22:21 PeterA joined #gluster
22:21 semiosis JoeJulian: pm
22:22 semiosis @seen jimjag
22:22 glusterbot semiosis: I have not seen jimjag.
22:24 Arminder joined #gluster
22:29 Arminder- joined #gluster
22:59 Guest38768 joined #gluster
23:01 M28_ joined #gluster
23:06 deniszh joined #gluster
23:16 deniszh joined #gluster
23:28 partner any particular reason why a rebalance for a certain volume results in rebalance running on all the hosts, even the ones that are in the peer group but have nothing to do with that volume?
23:28 JoeJulian In order to spread the load and theoretically complete the rebalance faster.
23:29 partner so any and every host in the peer group is crawling the volume despite not serving any of its bricks?
23:29 JoeJulian Supposed to be only crawling a portion of it. I haven't looked at how the coordination is done.
23:31 partner hmm, interesting, i wasn't expecting this to happen. was just wondering why the new servers in new datacenter are participating in the effort
23:32 partner i mean, a fair assumption is that only the hosts serving the bricks would be part of the process
23:32 JoeJulian Unless something is cpu intensive, I would agree.
23:34 partner i wonder if it will speed up a lot if i throw in couple of dozen of virtual machines to the peer :o
23:35 marcoceppi joined #gluster
23:40 gildub joined #gluster
23:42 partner i don't think there's much point in rebalancing a volume of this size, the only reason to run it is to free up space on the bricks that went over the "full" limits back when the volume was low on capacity
23:43 JoeJulian I agree
23:43 partner estimated duration was 298 days based on calculations from log entries for already processed dirs..
23:44 partner and given the memory leaks..
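
[note] Whatever set of nodes ends up participating, the per-node view of a running rebalance and the abort are both available from the CLI; a short sketch with "bigvol" as a placeholder volume name:

    gluster volume rebalance bigvol status    # per-node breakdown: files scanned, rebalanced, failures, state
    gluster volume rebalance bigvol stop      # abort the run, e.g. when the estimate says 298 days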
23:47 partner well, nice, not a day goes by without learning something new about the glusterfs :)
23:57 plarsen joined #gluster
23:57 dkorzhev1 joined #gluster
