
IRC log for #gluster-dev, 2014-11-07


All times shown according to UTC.

Time Nick Message
00:46 bala joined #gluster-dev
02:07 bala joined #gluster-dev
02:35 bala joined #gluster-dev
03:16 kdhananjay joined #gluster-dev
03:43 kshlm joined #gluster-dev
03:52 shubhendu_ joined #gluster-dev
03:56 bharata-rao joined #gluster-dev
04:16 kanagaraj joined #gluster-dev
04:18 hagarth joined #gluster-dev
04:21 nishanth joined #gluster-dev
04:23 atinmu joined #gluster-dev
04:25 itisravi joined #gluster-dev
04:36 anoopcs joined #gluster-dev
04:43 Rafi_kc joined #gluster-dev
04:43 rafi1 joined #gluster-dev
04:43 spandit joined #gluster-dev
04:46 spandit_ joined #gluster-dev
04:55 jiffin joined #gluster-dev
05:00 ndarshan joined #gluster-dev
05:07 lalatenduM joined #gluster-dev
05:11 ppai joined #gluster-dev
05:16 topshare joined #gluster-dev
05:19 soumya_ joined #gluster-dev
05:37 anoopcs joined #gluster-dev
05:39 anoopcs joined #gluster-dev
05:44 hagarth joined #gluster-dev
05:45 anoopcs joined #gluster-dev
06:03 aravindavk joined #gluster-dev
06:10 lalatenduM RaSTar++
06:10 glusterbot lalatenduM: RaSTar's karma is now 1
06:11 kdhananjay joined #gluster-dev
06:11 bala joined #gluster-dev
06:18 lalatenduM RaSTar, I think the summary of https://bugzilla.redhat.com/show_bug.cgi?id=1105147 does communicate the issue correctly
06:18 glusterbot Bug 1105147: medium, medium, ---, rtalur, POST , Setting either of user.cifs or user.smb option to enable leads to enabling of smb shares. Enable only when none are disable
06:18 lalatenduM s/does/does not/
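
For context, the user.cifs and user.smb settings referenced in the bug are ordinary volume options; a minimal sketch of the sequence the summary describes, assuming a hypothetical volume named myvol:

    # per the bug summary: setting either option to enable switches the SMB
    # share on, even though the other option is still set to disable
    gluster volume set myvol user.smb disable
    gluster volume set myvol user.cifs enable
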
06:21 soumya joined #gluster-dev
06:21 nishanth joined #gluster-dev
06:30 ppai joined #gluster-dev
06:41 soumya joined #gluster-dev
06:51 ira joined #gluster-dev
06:56 atinmu joined #gluster-dev
07:03 rgustafs joined #gluster-dev
07:08 kanagaraj joined #gluster-dev
07:12 nishanth joined #gluster-dev
07:14 ndarshan joined #gluster-dev
07:15 aravindavk joined #gluster-dev
07:30 ZhangHuan joined #gluster-dev
07:31 ZhangHuan joined #gluster-dev
07:44 atinmu joined #gluster-dev
08:00 ppai joined #gluster-dev
08:47 vikumar joined #gluster-dev
08:47 Humble lalatenduM++
08:47 glusterbot Humble: lalatenduM's karma is now 40
08:47 Humble thanks
08:52 hagarth joined #gluster-dev
08:55 lalatenduM Humble, :)
08:55 nishanth joined #gluster-dev
08:55 Humble hagarth, lalatenduM will do scratch builds as soon as 3.6.1 is released ..
08:55 Humble i will finish rest ..
08:56 ndevos hagarth: great thread on the sssd/gid-cache bits, but you made me think quite a bit about it!
09:00 lalatenduM ndevos, where is the mail thread , gluster-devel?
09:03 ndarshan joined #gluster-dev
09:07 ndevos lalatenduM: http://supercolony.gluster.org/pipermail/gluster-devel/2014-November/042793.html
09:10 ndevos lalatenduM: hagarth's and my emails to the sssd-list are waiting in moderation, but https://lists.fedorahosted.org/pipermail/sssd-devel/2014-November/022202.html is the start of the thread
09:12 lalatenduM ndevos, thans
09:12 lalatenduM thanks*
09:15 shubhendu_ joined #gluster-dev
09:19 ZhangHuan Got a problem when testing with AFR (using replica 2) and samba. With 4 Windows clients writing to a samba server over a 10Gig network, throughput never exceeds 200MB/s. For DHT, the performance is OK. After some investigation, I found that setattr takes too much time, averaging > 10ms
09:19 ZhangHuan Any idea what is wrong?
09:20 ZhangHuan Oh, the setattr time collected from glusterfsd is quite good, average < 1ms.
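
One way to narrow down where that setattr latency accrues is GlusterFS's built-in profiler, which reports per-FOP latency on the brick side; a minimal sketch, assuming a hypothetical volume named myvol:

    gluster volume profile myvol start
    # ... run the samba write workload from the Windows clients ...
    gluster volume profile myvol info    # per-FOP avg/min/max latency, including SETATTR
    gluster volume profile myvol stop

Comparing these brick-side numbers against the client-side timings shows whether the >10ms is spent in AFR/network round-trips or on the bricks themselves, which lines up with ZhangHuan's observation above that glusterfsd itself is fast.
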
09:24 atalur joined #gluster-dev
09:36 nishanth joined #gluster-dev
09:48 atinmu joined #gluster-dev
09:48 hagarth joined #gluster-dev
10:12 hagarth ndevos: thanks for copying gluster-devel. seems like a good discussion for involving more folks.
10:15 ndarshan joined #gluster-dev
10:16 nishanth joined #gluster-dev
10:20 ndevos hagarth: yeah, I think we can improve things for our users, and we'll need to think about that a little more
10:22 hagarth ndevos: certainly we can improve there
10:35 shubhendu joined #gluster-dev
10:39 atinmu joined #gluster-dev
11:15 shubhendu joined #gluster-dev
11:37 ndevos JustinClift: do you happen to know what the issue is with http://build.gluster.org/job/rackspace-regression-2GB/750/consoleFull ?
11:38 soumya__ joined #gluster-dev
11:45 shubhendu joined #gluster-dev
11:53 rgustafs joined #gluster-dev
12:24 soumya__ joined #gluster-dev
12:58 xavih could anyone assign these bugs to me? otherwise I can't manage them: 1159498, 1159471, 1158008, 1161588, 1159484, 1159529, 1125312, 1122581, 1140396, 1126734. Thanks :)
13:16 lalatenduM xavih, done, let me know if I missed anything
13:17 lalatenduM xavih, we need to fix the access issue Humble ^^
13:24 ndevos xavih: have you tried if you can assign a bug to yourself with the bugzilla command (from the python-bugzilla package)?
13:25 ndevos xavih: bugzilla login ; bugzilla modify --assignee=<email> 1159498 1159471 ...
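
Spelled out with xavih's full bug list from above (a sketch; the bugzilla CLI ships with the python-bugzilla package, and the assignee address is a placeholder for your own Bugzilla account):

    bugzilla login
    bugzilla modify --assignee=xhernandez@datalab.es \
        1159498 1159471 1158008 1161588 1159484 \
        1159529 1125312 1122581 1140396 1126734

Note that, as the error further down in the log shows, this only works if the account has permission to change the Assignee field on the bug.
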
13:27 xavih lalatenduM++: thanks :)
13:27 glusterbot xavih: lalatenduM's karma is now 41
13:28 hagarth joined #gluster-dev
13:28 xavih ndevos: I haven't tried that. I'll try it for the next bug. Thanks :)
13:30 ndevos xavih: maybe it'll work, let's see :)
13:31 ndevos I wonder if anyone proposed a talk about Gluster for http://www.librecon.io/en/agenda/
13:31 xavih ndevos: I'll let you know :)
13:31 ndevos xavih: are you (or datalab) attending LibreCon?
13:32 xavih ndevos: I don't think so
13:33 ndevos ah, ok
13:37 edward1 joined #gluster-dev
14:02 xavih ndevos: I've just tried the command, but it doesn't work: Server error: <Fault 115: 'You tried to change the Assignee field from bugs@gluster.org to xhernandez@datalab.es, but only the assignee of the bug, or a user with the required permissions may change that field.'>
14:29 JustinClift ndevos: Looking
14:30 JustinClift ndevos: That's weird.
14:30 JustinClift ndevos: Manu reported the same thing the other day on one of the NetBSD VMs
14:31 JustinClift ndevos: I think his workaround was to precreate the /usr/lib/python2.6/site-packages/gluster directory, owned by the Jenkins user
14:31 JustinClift ndevos: We might need to do that for the slaves now for some reason
14:32 JustinClift Sounds possibly bug-ish
14:32 JustinClift But, I don't have time to look into a proper fix atm
14:32 JustinClift ndevos: I'll log into the slaves and do it now
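
The workaround in shell form, for reference (a sketch; the exact owner/group of the Jenkins user may differ between slaves):

    mkdir -p /usr/lib/python2.6/site-packages/gluster
    chown jenkins:jenkins /usr/lib/python2.6/site-packages/gluster
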
14:39 shyam joined #gluster-dev
14:40 JustinClift ndevos: It looks like someone's already done it ;)
14:41 JustinClift Cool. :)
15:13 jobewan joined #gluster-dev
15:15 wushudoin joined #gluster-dev
15:26 ndevos JustinClift: done it? create the dir?
15:39 bala joined #gluster-dev
15:41 bala1 joined #gluster-dev
15:47 anoopcs joined #gluster-dev
15:48 soumya__ joined #gluster-dev
15:51 ndevos ai, more gid-cache mails - this is a hairy subject :-/
16:03 shubhendu joined #gluster-dev
16:12 ndevos JustinClift: can you as Jenkins supervisor have a look at the hanging http://build.gluster.org/job/rackspace-regression-2GB-triggered/2447/console ?
16:17 ira joined #gluster-dev
16:18 kkeithley ndevos: what's wrong with -regression-tests RPM?
16:31 ndevos kkeithley++ for the 3.6.1 email!
16:31 glusterbot ndevos: kkeithley's karma is now 37
16:32 bala joined #gluster-dev
16:32 ndevos kkeithley: if you install and run it, your system will likely be broken afterwards
16:32 ndevos kkeithley: also, after running it, there are artifacts left when uninstalling
16:33 kkeithley ah, right. We ought to zap it from 3.5.x too then
16:34 kkeithley something else to keep _for_fedora_koji_builds for
16:34 ndevos definitely
16:34 davemc Hey, I know I saw something about a gluster log "display?" tool somewhere and now I can't find it. Any clues guys?
16:34 ndevos yes, I thought that was the case, I must have missed that :-/
16:35 ndevos davemc: maybe JustinClift's glusterflow?
16:36 davemc ndevos, tks, will check
16:41 hagarth davemc: are you referring to fluentd plugin for gluster?
16:43 davemc nope, i think it was glusterflow
16:44 davemc glusterflow.org, to be precise
16:45 davemc I had logging analysis on the brain, but glusterflow it is
16:59 jiffin joined #gluster-dev
17:13 jiffin joined #gluster-dev
17:16 jiffin1 joined #gluster-dev
17:24 soumya__ joined #gluster-dev
17:27 ira joined #gluster-dev
17:42 vimal joined #gluster-dev
17:48 jiffin joined #gluster-dev
17:50 ndevos davemc, JustinClift: could one of you check the gluster-devel moderation queue and approve emails about the "memory cache for initgroups" topic?
17:53 jiffin joined #gluster-dev
17:55 renopt joined #gluster-dev
17:56 davemc on it, as soon as I find out if I can
17:59 jiffin joined #gluster-dev
18:00 ndevos thanks davemc, it helps if others can see the responses from the sssd devs, otherwise I'm the only one repluing to them ;-)
18:00 ndevos replying even
18:00 ndevos well, off to have weekend now, cya!
18:01 davemc so far, still searching through the spam
18:01 ndevos :-/
18:01 davemc ndevos, have a good one
18:01 ndevos thanks davemc, enjoy yours too
18:04 jiffin joined #gluster-dev
18:13 davemc ndevos, (even though you are gone) approved
18:16 jiffin1 joined #gluster-dev
18:28 ira joined #gluster-dev
18:31 jiffin joined #gluster-dev
19:29 ira joined #gluster-dev
19:30 jiffin joined #gluster-dev
19:43 lalatenduM joined #gluster-dev
21:04 jiffin joined #gluster-dev
22:10 JustinClift ndevos: No idea why that job running on slave20 was hanging.
22:10 JustinClift Weirdly, I couldn't remotely login to the VM via ssh either.
22:10 JustinClift So, I've aborted that job, and rebooted the slave.
22:10 JustinClift Remote login now seems to work.
22:10 * JustinClift shrugs
22:11 JustinClift Just retriggered it to run again, so let's see if it hangs again
22:59 badone joined #gluster-dev
