
IRC log for #gluster-dev, 2014-11-06


All times shown according to UTC.

Time Nick Message
00:17 bala joined #gluster-dev
00:51 bala1 joined #gluster-dev
00:56 kdhananjay joined #gluster-dev
01:14 topshare joined #gluster-dev
01:32 kdhananjay joined #gluster-dev
02:01 kdhananjay joined #gluster-dev
02:23 kdhananjay joined #gluster-dev
02:35 badone joined #gluster-dev
02:47 kdhananjay joined #gluster-dev
03:14 hagarth joined #gluster-dev
03:20 _Bryan_ joined #gluster-dev
03:25 bharata-rao joined #gluster-dev
03:32 shubhendu joined #gluster-dev
03:39 atalur joined #gluster-dev
03:55 kanagaraj joined #gluster-dev
04:04 itisravi joined #gluster-dev
04:05 pranithk joined #gluster-dev
04:08 topshare joined #gluster-dev
04:11 nkhare joined #gluster-dev
04:26 ppai joined #gluster-dev
04:34 anoopcs joined #gluster-dev
04:35 rafi1 joined #gluster-dev
04:35 Rafi_kc joined #gluster-dev
04:43 spandit joined #gluster-dev
04:52 hagarth joined #gluster-dev
04:54 kanagaraj joined #gluster-dev
04:57 jiffin joined #gluster-dev
05:21 lalatenduM joined #gluster-dev
05:28 kshlm joined #gluster-dev
05:47 soumya joined #gluster-dev
05:53 ndarshan joined #gluster-dev
06:31 kdhananjay joined #gluster-dev
06:35 raghu` joined #gluster-dev
06:46 nkhare joined #gluster-dev
06:47 badone joined #gluster-dev
06:48 soumya joined #gluster-dev
06:56 ppai joined #gluster-dev
07:03 topshare joined #gluster-dev
07:04 soumya joined #gluster-dev
07:10 rgustafs joined #gluster-dev
07:30 Humble kkeithley, lalatenduM ndevos 3.4.6beta2 packages are ready at download.g.org . Please cross-check and announce ..
07:30 Humble http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.6beta2/
07:59 ppai joined #gluster-dev
08:21 nkhare joined #gluster-dev
08:23 lalatenduM Humble, thanks Humble++
08:23 glusterbot lalatenduM: Humble's karma is now 12
08:23 shubhendu_ joined #gluster-dev
08:23 Humble lalatenduM++ U too!!!
08:23 glusterbot Humble: lalatenduM's karma is now 38
08:24 Humble I haven't checked that the installation works from the repos though
08:28 topshare joined #gluster-dev
08:28 ndevos lalatenduM++ Humble++ cool
08:28 glusterbot ndevos: lalatenduM's karma is now 39
08:28 glusterbot ndevos: Humble's karma is now 13
08:29 ndevos Humble: did you try to install on RHEL/CentOS 6.6? please verify http://blog.nixpanic.net/2014/11/installing-glusterfs-34x-35x-or-360-on.html too
08:31 ndevos lol, my blog has 112 visitors today already, it's mostly somewhere 30-50 per day
08:32 ndevos I guess that there is some interest to install the packages on 6.6 :)
08:32 Humble ndevos, I am sure its going to be more in coming days
08:33 ndevos lalatenduM, Humble: we should also make sure to include those solutions in our installation guide
08:33 Humble yesterday I helped to answer couple of requests in ovirt list
08:33 Humble today I forwarded the blog entry to subjected people
08:33 ndevos oh, cool
08:33 Humble so they will use that blog for successive discussions :)
08:34 Humble there was an oVirt release recently and people are struggling ..
08:34 Humble so better that we put it in a log
08:34 Humble log/blog
08:35 ndevos Humble: oh, can we install oVirt Hosted Engine with the setup-tool on a glusterfs mount? or is really nfs/iscsi the only option?
08:35 Humble ndevos++
08:35 glusterbot Humble: ndevos's karma is now 47
08:36 hagarth ndevos: nfs/iscsi is the only option I think. we need to check with them.. I am also kind of surprised about the complaints on split-brain in that thread. would be useful to understand how that happens...
08:37 ndevos hagarth: oh, I'm not following any threads... Jason pinged me on IRC and explained the need for using NFS on his Gluster servers
08:37 Humble ndevos, afaik, it's only nfs/iscsi.. gluster support is still not in
08:37 hagarth ndevos: remember reading it in ovirt users yesterday .. about options for Hosted Engine
08:38 ndevos Humble: or something like a 'clustered filesystem mountpoint' where users can just specify a directory they setup themselves?
08:39 Humble need to check on that.. atm I am not sure..
08:40 ndevos okay, I was thinking of asking around myself, but now I remembered that you are a very nice candidate to take that AI on :D
08:40 ndevos Humble++ ;)
08:40 glusterbot ndevos: Humble's karma is now 14
08:40 Humble hagarth++ :)
08:40 glusterbot Humble: hagarth's karma is now 20
08:40 Humble ndevos++ :)
08:40 glusterbot Humble: ndevos's karma is now 48
08:41 Humble ndevos, I hope we are not in a meeting channel :)
08:43 shubhendu_ joined #gluster-dev
08:45 atinmu joined #gluster-dev
08:46 vikumar joined #gluster-dev
08:51 ndevos #action Humble find out if oVirt Hosted Engine can use a non nfs/iscsi storage, like a user-configured mountpoint
08:51 * ndevos tries
08:54 Humble hagarth++ thanks
08:54 glusterbot Humble: hagarth's karma is now 21
08:57 nishanth joined #gluster-dev
09:08 vikumar joined #gluster-dev
09:13 pranithk joined #gluster-dev
09:14 pranithk ndevos: Thanks for re-triggering the build :-). ndevos++
09:14 glusterbot pranithk: ndevos's karma is now 49
09:16 ndevos pranithk: np!
09:16 atinmu joined #gluster-dev
09:19 shubhendu_ joined #gluster-dev
09:30 lalatenduM ndevos++ for the blog
09:30 glusterbot lalatenduM: ndevos's karma is now 50
09:31 lalatenduM ndevos, one comment regarding "The most prominent issue is that the glusterfs package from RHEL has a version of 3.6.0.28" line in the blog, the version of glusterfs client packages in EL 6.6 is 9
09:31 lalatenduM oops, I mean 3.6.0.29-2
09:32 ndevos lalatenduM: ah, really?
09:32 lalatenduM ndevos, yeah
09:33 ndevos lalatenduM: right, rhel-6-server-htb-rpms contain .28
09:33 * ndevos seems to be a High Touch Beta customer?
09:33 lalatenduM ndevos, :)
09:34 ndevos lalatenduM: maybe that was the version on 6.6 GA, and .29 was an update?
09:35 lalatenduM ndevos, 29-2 was part of 6.6 GA
09:36 ndevos lalatenduM: no, my rhel-6.6 dvd has /mnt/Packages/glusterfs-3.6.0.28-2.el6.x86_64.rpm
09:36 lalatenduM ndevos, I am sure you have a pre GA ISO
09:37 * lalatenduM is confident of it :)
09:38 ndevos lalatenduM: I downloaded it on tuesday from RHS
09:38 ndevos uh, RHN
09:50 ppai joined #gluster-dev
09:53 lalatenduM ndevos, ok, 28-2 is in GA iso and 29-2 is available as an update :0)
10:26 ppai joined #gluster-dev
10:27 topshare joined #gluster-dev
10:27 hagarth xavih: ping, there have been a cpl of ec bugs logged today. did you happen to notice those errors?
10:28 Humble ndevos, hagarth kkeithley requested tags 3.4.7 , 3.5.4, 3.6.2 are present for GlusterFS product now..
10:28 ndevos Humble++ thanks!
10:28 glusterbot ndevos: Humble's karma is now 15
10:28 hagarth Humble++ thanks!
10:29 glusterbot hagarth: Humble's karma is now 16
10:29 Humble np!
10:29 ndevos hagarth: are you planning to include a backport of http://review.gluster.org/#/c/9036 in 3.6.1?
10:29 ndevos -> the symbol-versioning one
10:29 hagarth ndevos: yes
10:30 ndevos hagarth: okay, cool
10:30 * ndevos tests the last update in master now, will do the 3.6 version afterwards
10:31 ndevos that would be http://review.gluster.org/9055 , in case others want to have a go at the 3.6 version
10:43 aravindavk joined #gluster-dev
10:57 kdhananjay joined #gluster-dev
11:05 ppai joined #gluster-dev
11:06 soumya_ joined #gluster-dev
11:18 ndarshan joined #gluster-dev
11:19 xavih hagarth: I'm working on those errors
11:19 pranithk joined #gluster-dev
11:20 shubhendu_ joined #gluster-dev
11:26 rgustafs joined #gluster-dev
11:33 krishnan_p joined #gluster-dev
11:33 tg2 joined #gluster-dev
11:36 hagarth xavih: cool, thanks
11:38 kkeithley1 joined #gluster-dev
11:44 pranithk joined #gluster-dev
11:55 pranithk left #gluster-dev
11:58 edward1 joined #gluster-dev
12:00 soumya_ joined #gluster-dev
12:02 jdarcy joined #gluster-dev
12:07 shyam joined #gluster-dev
12:08 topshare joined #gluster-dev
12:09 ndarshan joined #gluster-dev
12:09 shubhendu_ joined #gluster-dev
12:16 krishnan_p joined #gluster-dev
12:28 ppai joined #gluster-dev
12:29 itisravi_ joined #gluster-dev
12:56 hagarth joined #gluster-dev
12:57 jdarcy joined #gluster-dev
13:02 hagarth JustinClift: ping, attending the meeting?
13:11 topshare joined #gluster-dev
13:12 Humble hagarth++
13:12 glusterbot Humble: hagarth's karma is now 22
13:13 shubhendu_ joined #gluster-dev
13:18 topshare joined #gluster-dev
13:42 topshare joined #gluster-dev
13:59 shyam joined #gluster-dev
14:07 topshare joined #gluster-dev
14:07 JustinClift hagarth: Arrgh.  Totally forgot.
14:08 JustinClift I really need to add that to my reminders. :(
14:08 * ndevos shakes his fist at ./tests/basic/quota-anon-fd-nfs.t _o*
14:21 aravindavk joined #gluster-dev
14:35 _nixpanic joined #gluster-dev
14:35 _nixpanic joined #gluster-dev
14:43 bala joined #gluster-dev
14:48 jobewan joined #gluster-dev
14:49 itisravi joined #gluster-dev
15:15 _Bryan_ joined #gluster-dev
15:26 soumya_ joined #gluster-dev
15:43 JustinClift kkeithley_: ping
15:43 glusterbot JustinClift: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:44 JustinClift glusterbot: FOAD
15:44 JustinClift kkeithley_: ping
15:44 glusterbot JustinClift: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:44 JustinClift kkeithley_: ping ping fucking ping :p
15:44 JustinClift kkeithley_: Don't suppose you have any CX4 - CX4 cables?
15:45 * JustinClift will happily swap the CX4 hybrid ones. ;)
16:26 kkeithley_ JustinClift: sorry, I don't
16:26 JustinClift kkeithley_: Np.  Will do the china order thing. :)
16:27 JustinClift kkeithley_: Want me to post back the cables, or can I hang onto them?
16:27 JustinClift (me has a potential use for them, but not right away)
16:27 * kkeithley_ wonders if he didn't press send on the email?
16:28 kkeithley_ If you need a couple (or a few) take them. Send back what you don't keep. I have some more of the old cards in other machines that I'll shuffle around
16:28 JustinClift Yeah, I don't think you did
16:28 * JustinClift checks email again
16:28 JustinClift Cool.  I'll snag a few of them, and send back the rest :)
16:37 kkeithley_ hmm, it's in my Sent mailbox....
16:38 kkeithley_ anyway
16:46 JustinClift kkeithley_: You're right.  It was in my Inbox after all, just not marked as new mail.
16:46 JustinClift I've been having VPN issues for the last few hours, so guessing something to do with mail client being disconnected wrongly or something
16:46 JustinClift Sorry for the noise. ;)
16:47 kkeithley_ no prob.
16:47 JustinClift kkeithley_: And yeah, the switch is a 20Gb/s one, not a 40 ;)
16:47 JustinClift http://www.ebay.co.uk/itm/301301152695
16:50 hagarth joined #gluster-dev
16:56 nishanth joined #gluster-dev
17:11 hagarth JustinClift: no worries, will send the meeting logs to you and davemc
17:21 kanagaraj joined #gluster-dev
17:22 JustinClift hagarth: Tx :)
17:27 ndevos aaah! meeting logs, I completely forgot to send the logs of the Bug Triage meeting :-/
17:29 hagarth ndevos: are there any known problems with gid-cache on the server?
17:30 ndevos hagarth: not known to me, why?
17:31 ndevos there *were* issues with it once, and that showed that disabling gid-cache was also not possible, but that got fixed
17:32 hagarth ndevos: with server.manage-gids enabled, sssd is being kept quite busy
17:33 hagarth I thought that we would need to hit getgrouplist() and sssd only if there was a gid cache miss
17:33 ndevos yeah, I would expect that
17:34 ndevos otherwise it does not make sense to do any caching...
17:34 hagarth ndevos: we do not have any stats for cache miss right?
17:34 ndevos hagarth: no, I do not think so
17:34 hagarth ndevos: let me loop you into that thread
17:34 ndevos well, I'm pretty confident that we dont have that :)
17:35 ndevos sure, more emails \o/
17:35 hagarth ndevos: ok :)
17:35 hagarth yay, emails FTW :D
17:36 JustinClift hagarth: Ahhh, more Corvid Tech progress then?
17:37 * JustinClift isn't seeing any emails about it yet
17:37 JustinClift But, it could be the VPN + Claws mail not playing well together atm
17:40 hagarth JustinClift: I have been intending to check this with ndevos, but somehow this eluded me till now.
17:41 JustinClift ;)
17:41 JustinClift np
17:43 JustinClift The sssh guys were saying they'd investigate on their side, but I haven't heard back from them yet. :(
17:43 JustinClift s/sssh/sssd/
17:45 hagarth JustinClift: think I have an email from one of the sssd developers about what they are doing
17:47 JustinClift Cool
18:11 davemc joined #gluster-dev
18:13 lalatenduM joined #gluster-dev
18:13 davemc Hey folks. I'd really like to see if we could submit some glusterfs talks to Vault, http://events.linuxfoundation.org/events/vault
18:16 ndevos yes, we really should!
18:16 davemc I think an interesting discussion on distributed small file perf would be killer
18:18 hagarth davemc: I would like to see us having addressed the problem by Vault :)
18:23 davemc hagarth, or at least presenting different ideas for response and discussion
18:25 hagarth davemc: yes, absolutely. sounds like a great topic for discussion.
18:49 lalatenduM joined #gluster-dev
18:52 davemc hagarth, was the 4.0 meeting today (Thursday my time)?
18:52 davemc if so, apologies. thought it was tomorrow for some reason
18:53 hagarth davemc: no worries, we discussed whatever we could. will send out details shortly.
18:53 davemc tks. Had just gotten back from 19 hours of travel. slept in for some reason
18:55 hagarth davemc: hope it wasn't 19 hours in the air!
18:56 davemc thank goodness no.
18:56 hagarth I get to do that when I visit the west coast :)
18:56 davemc I get that.  that's a long trip
19:00 hagarth davemc: speaking of long trips, you should plan a trip to BLR sometime soon.
19:02 davemc hagarth, JAn with a goal of a design summit maybe?
19:02 hagarth davemc: possibly, yes.
19:05 lalatenduM joined #gluster-dev
19:26 davemc Gluster Use survey closes tomorrow, Friday, 7-November. Last chance: https://www.surveymonkey.com/s/DLN7MQX
19:33 lalatenduM joined #gluster-dev
19:47 lalatenduM kkeithley_, kkeithley the rpms for 3.4.6beta2 are ready at d.g.o , Humble has mentioned it before, but I did not see any announcement i.e. blog etc, so thought of telling you again
19:55 kkeithley_ okay. hmmm, I looked in the wrong place when I checked earlier today.
19:56 kkeithley_ thanks
19:58 lalatenduM kkeithley, np
21:17 ndevos hmm, hagarth's and my posts are waiting in the moderation queue for sssd-devel@ ... just pinged in #sssd about it
21:17 ndevos JustinClift: ^
21:25 JustinClift Cool. :)
21:25 JustinClift It's interesting what problems show up when getting things tested "at scale". ;)
21:30 JustinClift davemc: Added your @gluster.org address to the automatic-accept list for @announce and um... gluster-devel I think it was
21:31 JustinClift They shouldn't get held up in moderation queue any more
21:35 davemc tks. Finally got around to creating the identity, while trying to ignore the presentation for BU planning
21:46 ndevos JustinClift: http://paste.fedoraproject.org/148526/31025614 contains my reply, maybe it helps?
21:47 * ndevos isnt aware of the actual issue, but hopes he explained the usage of getgrouplist() well enough
21:52 * JustinClift wonders if the "gluster volume set <VOLUME> server.gid-timeout 30" option would help Corvid Tech
21:52 JustinClift Hmmm, when your posts get through moderation we can point it out to them
21:53 * JustinClift has been reading the email trail, but doesn't want to do much with this other than ensure the right people for solving it are involved :)
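[editor's note: the server.gid-timeout tuning JustinClift mentions above can be sketched as below; the volume name "gv0" and the 1800-second value are placeholders for illustration, not taken from this log]

```shell
# Sketch: raise the server-side group-cache lifetime (in seconds) so that
# group lookups hit sssd / getgrouplist() less often.
# "gv0" and "1800" are assumed example values.
gluster volume set gv0 server.gid-timeout 1800

# Confirm the reconfigured option (volume info lists non-default options):
gluster volume info gv0 | grep gid-timeout
```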
21:58 badone joined #gluster-dev
21:59 ndevos JustinClift: I'm not on any emails related to problem, that ^ one is a reply to the only mail I got
22:00 ndevos and, that is *not* an invite to fill my inbox
22:06 JustinClift Heh
22:08 JustinClift I didn't know you were the right person to connect in.  It started out as a sssd problem, so I asked Pranith, then got the sssd guys in, who investigated and determined it's due to getgrouplist()
22:08 JustinClift So, then I asked Vijay (etc)
22:08 JustinClift ;)
22:09 ndevos whatever works :)
22:09 ndevos I've written to the sssd list and if that helps someone, I'm happy to hear that
22:11 ndevos it could be that the default timeout for server.gid-timeout is very low for most environments, increasing it should not often be a problem, some environments allow up to 30 minutes of group caching, I think
22:13 ndevos it's actually quite interesting, if you have the groups in LDAP or something, you will want to keep the cache a little longer, I'd say
22:13 ndevos otherwise *many* GlusterFS operations would contact sssd and get delayed because of that
22:14 ndevos thats something you could blog about, JustinClift ;-)
22:14 * ndevos leaves for the day, cya!
22:53 shyam joined #gluster-dev
