
IRC log for #gluster-dev, 2016-05-10


All times shown according to UTC.

Time Nick Message
00:44 kkeithley joined #gluster-dev
00:45 ndk joined #gluster-dev
01:31 EinstCrazy joined #gluster-dev
01:41 rafi joined #gluster-dev
01:58 lpabon joined #gluster-dev
02:02 ndk joined #gluster-dev
02:08 EinstCrazy joined #gluster-dev
02:10 sakshi joined #gluster-dev
02:12 luizcpg joined #gluster-dev
02:21 gem joined #gluster-dev
02:42 mchangir_ joined #gluster-dev
02:53 lpabon joined #gluster-dev
02:56 luizcpg joined #gluster-dev
02:57 aravindavk joined #gluster-dev
03:28 EinstCrazy joined #gluster-dev
03:34 atinm joined #gluster-dev
03:41 nbalacha joined #gluster-dev
03:46 EinstCra_ joined #gluster-dev
03:54 itisravi joined #gluster-dev
04:00 josferna joined #gluster-dev
04:02 nishanth joined #gluster-dev
04:07 shubhendu joined #gluster-dev
04:29 Saravanakmr joined #gluster-dev
04:30 spalai joined #gluster-dev
04:36 atalur joined #gluster-dev
04:38 gem joined #gluster-dev
04:43 rafi joined #gluster-dev
04:46 overclk joined #gluster-dev
04:51 aspandey joined #gluster-dev
05:05 anoopcs spalai, Can you please take a look at the updated patch set: https://review.gluster.org/#/c/14189/?
05:08 karthik___ joined #gluster-dev
05:09 raghug joined #gluster-dev
05:14 poornimag joined #gluster-dev
05:18 ndarshan joined #gluster-dev
05:19 kotreshhr joined #gluster-dev
05:21 spalai left #gluster-dev
05:23 hgowtham joined #gluster-dev
05:25 Apeksha joined #gluster-dev
05:31 rafi1 joined #gluster-dev
05:34 aravindavk joined #gluster-dev
05:37 jiffin joined #gluster-dev
05:38 vimal joined #gluster-dev
05:42 hchiramm joined #gluster-dev
05:47 Bhaskarakiran joined #gluster-dev
05:50 rafi joined #gluster-dev
05:53 mchangir_ joined #gluster-dev
05:53 asengupt joined #gluster-dev
05:53 pur joined #gluster-dev
05:55 atalur joined #gluster-dev
06:00 spalai joined #gluster-dev
06:05 pkalever joined #gluster-dev
06:08 Manikandan joined #gluster-dev
06:26 kshlm joined #gluster-dev
06:27 skoduri joined #gluster-dev
06:27 rafi josferna++
06:27 glusterbot rafi: josferna's karma is now 3
06:27 josferna rafi, thanks :)
06:30 prasanth joined #gluster-dev
06:30 kdhananjay joined #gluster-dev
06:37 ashiq_ joined #gluster-dev
06:40 poornimag skoduri, ping, if you can give acks to http://review.gluster.org/#/c/12995/ and http://review.gluster.org/#/c/12996/ i can get that merged.
06:40 poornimag skoduri, thanku:)
06:43 kshlm joined #gluster-dev
06:46 Manikandan joined #gluster-dev
06:53 rraja joined #gluster-dev
06:53 mchangir_ joined #gluster-dev
06:57 raghug joined #gluster-dev
06:58 atalur joined #gluster-dev
07:02 nbalacha joined #gluster-dev
07:03 atinm joined #gluster-dev
07:06 skoduri joined #gluster-dev
07:11 pkalever joined #gluster-dev
07:17 skoduri poornimag, sure..will do it by eod today
07:19 rafi joined #gluster-dev
07:19 kshlm joined #gluster-dev
07:25 poornimag skoduri, Thank you
07:36 pranithk1 joined #gluster-dev
07:44 vimal joined #gluster-dev
07:45 mchangir_ joined #gluster-dev
07:46 nbalacha joined #gluster-dev
07:49 atinm joined #gluster-dev
07:55 pranithk1 joined #gluster-dev
08:17 raghug joined #gluster-dev
08:24 skoduri joined #gluster-dev
08:43 rastar joined #gluster-dev
08:47 ndevos anoopcs: I just filed https://github.com/gluster/glusterfs-coreutils/issues/15 , it probably is something you guys want to add to Samba too
08:47 atinm joined #gluster-dev
09:08 aravindavk joined #gluster-dev
09:16 Saravanakmr joined #gluster-dev
09:21 skoduri ndevos, wrt http://review.gluster.org/#/c/14278, do you agree that we need to set errno in inode_link itself depending on the failure case?
09:21 ndevos skoduri: yes
09:22 skoduri ndevos, okay, thanks.. since it is common to many components, I wanted to confirm.. will make the changes..
09:23 ndevos skoduri: it is better to also set the errno, returning NULL is fine, but a way to check what went wrong is better
09:23 skoduri ndevos, agree
09:32 opelhoward_ joined #gluster-dev
09:36 atinm joined #gluster-dev
09:37 opelhoward_ Hi, I would like to ask how to read the gluster ID (GFID) of a file from the terminal. I tried "getfattr -d hello.txt", but it seems nothing comes up.
09:40 bfoster joined #gluster-dev
09:48 opelhoward_ never mind. I have to pass the attribute name or pattern to getfattr.
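For reference, the attribute opelhoward_ was after can be read by naming it explicitly, since trusted.* xattrs are not included in a plain "getfattr -d" dump. A minimal sketch, assuming the file is queried directly on a brick (the brick path below is a placeholder):

    # read just the GFID, as stored on the brick
    getfattr -n trusted.gfid -e hex /bricks/brick1/hello.txt
    # or dump everything, including the trusted.* namespace
    getfattr -d -m . -e hex /bricks/brick1/hello.txt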
09:54 hchiramm anoopcs, https://github.com/gluster/glusterdocs/pull/101
10:02 spalai left #gluster-dev
10:02 spalai joined #gluster-dev
10:03 kkeithley1 joined #gluster-dev
10:15 opelhoward_ It seems the hash range of a directory/brick is not saved in the xattr (cmiiw). How could I get the range for a specific dir (from the terminal)?
10:16 opelhoward_ I am trying to learn about Gluster concepts in practice
10:26 kshlm opelhoward_, The range is defined by the attribute trusted.glusterfs.dht
10:26 kshlm You can read about it a little more here http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
10:31 EinstCrazy joined #gluster-dev
10:34 penguinRaider joined #gluster-dev
10:34 dlambrig_ joined #gluster-dev
10:40 opelhoward_ right, I have just been informed by @atalur. thx @kshlm. it is in the rightmost 2*8 bytes
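The layout xattr kshlm mentioned can be dumped the same way on a brick directory. A sketch with a placeholder path, assuming the usual on-disk format of four big-endian 32-bit fields (count, type, hash-range start, hash-range end), so the range sits in the last 8 bytes of the value:

    getfattr -n trusted.glusterfs.dht -e hex /bricks/brick1/somedir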
10:56 mchangir_ joined #gluster-dev
11:06 pranithk1 joined #gluster-dev
11:13 gem joined #gluster-dev
11:17 mchangir_ joined #gluster-dev
11:19 hgowtham joined #gluster-dev
11:19 hgowtham joined #gluster-dev
11:21 ppai joined #gluster-dev
11:21 penguinRaider joined #gluster-dev
11:35 Debloper joined #gluster-dev
11:37 anoopcs spalai, Can you please re-visit https://review.gluster.org/#/c/14189/?
11:37 kkeithley_ Community Bug Triage meeting in ~20 minutes in #gluster-meeting
11:40 kkeithley_ Reminder: If you file a bug, you can (and you may, and maybe even you should) triage it yourself. If you know who to assign it to (including yourself), do so. Then add 'Triaged' to the keywords.
11:43 hgowtham joined #gluster-dev
11:46 Bhaskarakiran joined #gluster-dev
12:00 kkeithley_ Community Bug Triage meeting _now_, in #gluster-meeting
12:06 aravindavk joined #gluster-dev
12:11 spalai anoopcs: will do by tomorrow
12:11 kotreshhr left #gluster-dev
12:11 anoopcs spalai, Ok. Thanks.
12:14 luizcpg joined #gluster-dev
12:22 opelhoward_ joined #gluster-dev
12:29 shaunm joined #gluster-dev
12:39 nbalacha joined #gluster-dev
12:42 josferna joined #gluster-dev
12:53 kkeithley_ skoduri++
12:53 glusterbot kkeithley_: skoduri's karma is now 22
12:53 kkeithley_ ndevos++
12:53 glusterbot kkeithley_: ndevos's karma is now 261
12:53 kkeithley_ jiffin++
12:53 glusterbot kkeithley_: jiffin's karma is now 38
12:53 rraja joined #gluster-dev
12:54 kkeithley_ post-factum++
12:54 glusterbot kkeithley_: post-factum's karma is now 11
12:54 ndevos kkeithley_++  too
12:54 glusterbot ndevos: kkeithley_'s karma is now 7
12:54 skoduri kkeithley_++
12:54 glusterbot skoduri: kkeithley_'s karma is now 8
12:57 EinstCrazy joined #gluster-dev
12:58 aravindavk joined #gluster-dev
13:01 spalai left #gluster-dev
13:01 shubhendu_ joined #gluster-dev
13:08 nishanth joined #gluster-dev
13:16 ndarshan joined #gluster-dev
13:29 skoduri_ joined #gluster-dev
13:29 jiffin1 joined #gluster-dev
13:30 pkalever left #gluster-dev
13:39 penguinRaider joined #gluster-dev
13:39 josferna joined #gluster-dev
13:41 atinm joined #gluster-dev
13:43 jobewan joined #gluster-dev
13:46 nigelb Hi misc!
13:46 nigelb I have so many questions :)
13:52 misc I can't hear you, I am ... under a tunnel, let me call you back later
13:52 * misc rush to a airplane to a undisclosed location
13:54 nigelb heh
13:54 nigelb Is there a repo with the ansible stuff?
13:54 misc yes !
13:54 misc https://github.com/gluster/gluster.org_ansible_configuration
13:54 misc that's a WIP for now, because we are using salt and I am converting stuff
13:55 misc https://github.com/gluster/gluster.org_salt_states
13:55 * misc see the sync is broken
13:55 nigelb I'm more of an ansible person than salt.
13:55 nigelb I'm glad we're moving to ansible.
13:56 misc well, that makes reuse easier, and indeed, after 1 year, salt didn't get much adoption
13:56 misc (at least, for ansible I got 1 patch, despite it not being needed, but that's great)
13:57 nigelb I heard of salt a lot around conferences.
13:57 pranithk1 joined #gluster-dev
13:57 nigelb But most people I knew professionally used puppet/ansible/chef.
13:58 nigelb in some combination.
13:58 nbalacha joined #gluster-dev
13:59 misc salt is nice too, there is a sizable community in Paris, but lots of RH folks know ansible, so :)
13:59 misc but the current setup still piggybacks on the salt bus for various reasons
14:00 misc https://www.gluster.org/pipermail/gluster-infra/2016-January/001594.html
14:02 nigelb I agree with that side effect.
14:02 nigelb Where you probably need root on the servers you intend to touch, at the very least.
14:05 misc yeah, it's more the idea of having an ssh key that can be stolen that I do not like
14:06 misc I was looking at various options like tpm, smartcard, but none of them work in the current cloud...
14:08 misc nigelb: also, on the list of things to do: automate jenkins job builder deployment
14:08 misc (someone did start, as I found out before being away for the week)
14:09 nigelb https://github.com/ansible/ansible/issues/10065
14:09 nigelb There's this.
14:10 luizcpg joined #gluster-dev
14:10 nigelb misc: our jenkins lives in rackspace or temporary racks in the DC?
14:12 misc nigelb: temporary space in DC
14:12 misc temporary since they were supposed to move last month, but for that, we need IPs, and for that, we need them to be assigned
14:12 misc so we are waiting on NOC
14:12 misc nigelb: that's for the main instance; for the builders, that's mostly rackspace, except 2 nodes in the DC
14:13 mchangir joined #gluster-dev
14:14 nigelb we have a jenkins that's different from build.gluster.org?
14:14 nigelb I'm still trying to wrap my head around how things work.
14:14 nigelb And what we run in terms of CI/CD
14:14 josferna joined #gluster-dev
14:14 misc so no, we have build.gluster.org
14:14 misc running in a VM, hosted on formicary.gluster.org
14:15 misc build.gluster.org is a RHEL 6 box, running the latest rpm for jenkins
14:15 misc formicary is EL 7, with libvirt
14:15 misc and dev.gluster.org is running an old version of gerrit on EL5
14:15 nigelb ow.
14:16 misc build and dev also have other names for more fun (like git.gluster.org, bits.gluster.org), but we should keep build and dev :)
14:16 nigelb stupid question - when you say EL7 and EL5, you mean RHEL7 and RHEL5, right?
14:16 nigelb I come from the world of Ubuntu *hides*
14:16 misc nigelb: well, rhel or centos
14:16 misc dev.gluster.org is centos 5.11
14:17 misc build.gluster.org is centos 6.7
14:17 misc and formicary is centos 7.2.1511
14:18 misc dev and build are "historical" VMs that were running on iweb before I moved them to the RH DC
14:18 nigelb ahh.
14:18 misc having them "automated" is on the todo-list
14:18 misc jenkins shouldn't be too hard, it already uses rpm
14:18 nigelb what's causing gerrit to fail so often?
14:19 nigelb underprovisioned machine or just plain ol' java?
14:19 misc no idea
14:19 misc we did give more ram to the server
14:19 misc (IIRC)
14:19 misc there is 6G of ram
14:19 misc and 2G free
14:20 nigelb Does our CI clone off gerrit?
14:20 nigelb My personal theory is we may be DDoSing ourselves.
14:20 misc most likely
14:20 misc mhhh
14:20 nigelb No shred of proof, though.
14:20 misc well, could be
14:20 misc given how sensitive gerrit is, I didn't inspect it
14:20 nigelb the glusterfs repo could use a gc
14:20 nigelb `git gc` rather.
14:21 misc I think I did that last year
14:21 misc but that's like taxes, maybe we need to do it every year
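For reference, a manual pass can be run against the server-side copy of the repo; a sketch, with a placeholder path standing in for wherever gerrit keeps its bare repositories:

    git --git-dir=/srv/gerrit/git/glusterfs.git gc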
14:21 nigelb There's a setting in gerrit to do it every x days.
14:21 misc but we could also extend the number of threads now
14:21 nigelb I don't know if that setting works with our version of gerrit.
14:22 nigelb https://gerrit-documentation.storage.googleapis.com/Documentation/2.12.2/config-gerrit.html#gc
14:22 misc our version is a bit older :)
14:23 nigelb yeah, can't find it for docs on our version :(
14:23 misc and no, we do not have the setting in the config
14:23 misc (config not yet in ansible and/or salt)
14:24 nigelb The thing I'm most comfortable jumping into is the ansible roles on your list.
14:25 nigelb I'm familiar with ansible, but not what you'd like to see done.
14:25 nigelb Do we have salt scripts for that, or bash scripts that I can base it on?
14:28 nigelb misc: is there a historical reason for a whole machine for downloads.gluster.org?
14:28 misc nigelb: yes
14:29 nigelb I figured there might be something like that :)
14:30 misc long story: first part of https://www.youtube.com/watch?v=_MJHa1_JwoA
14:31 nigelb Ouch.
14:31 * nigelb bookmarks for tomorrow.
14:32 misc short version https://www.gluster.org/pipermail/gluster-devel/2015-October/046955.html
14:32 nigelb ah, this I know of. Vaguely.
14:33 misc it is different from http://www.zdnet.com/article/red-hats-ceph-and-inktank-code-repositories-were-cracked/#!
14:33 misc nigelb: but yeah, there is a separate server for security reason
14:33 nigelb no, no. I heard from amye.
14:34 nigelb Part of my question was: is there any usefulness in hosting it on Rackspace's Cloud Files?
14:34 nigelb Less need for us to run an entire server just to serve files.
14:34 misc well, depends
14:34 misc we are already spending too much on rackspace
14:34 misc I am not sure what kind of metrics we can have
14:34 nigelb They give us access logs
14:35 misc and I would like to be able to offer https by default
14:35 nigelb Ah.
14:35 nigelb Yes, I would want to as well.
14:35 misc nigelb: yeah, but we need to send the log to a company called bitergia
14:35 nigelb I know, I've been asking around what's keeping us running that server.
14:35 misc and well, I would like the ip addresses etc to be restricted in terms of access,
14:35 misc currently, bitergia gives us an ssh key, so I can limit by ip and that's encrypted
14:36 nigelb going back to what we can do to begin with.
14:36 nigelb How can I get started on the virt-install role?
14:37 misc mhh
14:37 misc let me look at what state I left my code in, as it was still a bit nebulous when I was working on it, more like a draft
14:38 misc mhh, didn't even commit anything :(
14:39 misc nigelb: so you know how virt-install work ?
14:39 nigelb Nope. I'm reading documentation as we speak
14:41 misc so basically, the idea is to just run virt-install if we do not detect the VM: https://paste.fedoraproject.org/364678/89122314/
14:41 misc the complicated stuff is: - templating the kickstart
14:41 misc - writing the proper virt_install_command
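For context, a rough sketch of the kind of virt_install_command such a role would template; every name, URL, and path here is a placeholder, not the actual command used on the gluster virt hosts:

    virt-install --name builder01 \
        --ram 4096 --vcpus 2 \
        --disk size=40 \
        --location http://mirror.centos.org/centos/7/os/x86_64/ \
        --extra-args "ks=http://example.org/kickstart/builder.ks console=ttyS0" \
        --network bridge=br0 \
        --graphics none --noautoconsole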
14:43 misc http://paste.fedoraproject.org/364682/46289138 is the default/main.yml as I tried to write it
14:43 nigelb where is this used?
14:43 misc nowhere
14:43 nigelb like, when we have this role, what will we use it for?
14:43 misc oh
14:43 misc to be able to declare a VM to run on the various virthosts we have
14:44 misc ie, if I want a new VM, I have to add it to the ansible script of the virt-host
14:44 misc (ie, formicary.gluster.org for now)
14:44 nigelb aha
14:44 nigelb I'll kick around with it tomorrow and see where i go.
14:44 misc so if we want a new builder, just add it to ansible, push and wait
14:45 nigelb (I'm around part-time until end of next week)
14:45 misc ok
14:45 nigelb so, I won't get spectacularly fast results.
14:45 misc part time, in the afternoon ?
14:45 nigelb er, your afternoon, yeah.
14:45 misc so your evening ?
14:45 nigelb ending around this time.
14:45 nigelb yeah.
14:45 nigelb I'll be around from about 4:30 pm my evening, for about 4 to 5 hours.
14:47 misc ok
14:48 overclk joined #gluster-dev
14:50 nigelb misc: Before I sign out though, how can I help with the gerrit situation?
14:50 nigelb where situation is the large dependency chain involved in upgrading it.
14:50 rraja joined #gluster-dev
14:51 misc nigelb: become a java coder :) ?
14:51 nigelb ha LD
14:51 nigelb :D
14:51 nigelb I'd like to go to the gerrit folks for help, if we're all up to date.
14:51 misc we are using an old version of gerrit
14:51 misc so I suspect they will say "upgrade"
14:52 nigelb Yeah.
14:52 misc and they would be right
14:52 misc an ansible playbook that deploys gerrit would help
14:52 misc (gerrit + a postgresql db)
14:52 misc then we can iterate and migrate data
14:52 nigelb I'm guessing onto Centos 7?
14:52 nigelb Or some version of Centos.
14:53 misc yeah
14:53 misc 7, no reason to take less :)
14:53 jiffin joined #gluster-dev
14:54 nigelb misc: do we have monitoring on that machine?
14:54 misc nigelb: depends; if users count as "monitoring", the answer is "yes", if not, the answer is "no"
14:54 nigelb I'm curious to see memory pattern and gerrit uptime alongside our CI activity frequency.
14:54 nigelb heh, so I guess no :)
14:54 misc iirc, we have munin, but I just found out it is broken
14:55 nigelb munin also didn't have that box on it the last I checked.
14:55 misc and the deployment of zabbix was not finished, and I didn't have time to reuse nagios from other projects
14:55 misc so no
14:55 nigelb I'll live.
14:55 nigelb Let's see how I go tomorrow.
14:55 misc I kinda pushed changing the box for the migration, but now that's done, I can start to add stuff there
14:56 misc if I place it under ansible management, munin should be added magically (minus bug)
14:56 nigelb heh, yeah. I had nagios magically adding all hosts at $old_job
14:56 mchangir joined #gluster-dev
14:57 misc my main issue is that I already have that for manageiq.org, but I want to avoid duplicating the job
14:57 misc but then, I can't do generic stuff for everything :/
14:57 nigelb (this is part of my annoyance with nagios)
14:58 nigelb s/nagios/ansible
14:58 nigelb you can't get perfectly generic things that are readable.
14:58 nigelb at some point it'll be heavily customized.
14:58 * nigelb -> afk
14:58 josferna joined #gluster-dev
15:21 pranithk1 joined #gluster-dev
15:31 wushudoin joined #gluster-dev
15:36 kbyrne joined #gluster-dev
15:41 dlambrig_ left #gluster-dev
15:47 rafi1 joined #gluster-dev
15:48 gem joined #gluster-dev
15:53 skoduri joined #gluster-dev
15:56 rraja joined #gluster-dev
16:42 ashiq_ joined #gluster-dev
16:45 penguinRaider joined #gluster-dev
16:45 dlambrig_ joined #gluster-dev
16:48 rraja joined #gluster-dev
17:01 wushudoin joined #gluster-dev
17:13 shubhendu_ joined #gluster-dev
17:13 mchangir joined #gluster-dev
17:14 shubhendu_ joined #gluster-dev
17:24 hagarth joined #gluster-dev
17:29 ndevos and fyi https://www.arrfab.net/attic/DevCloud-Gluster-IB.jpg
17:30 ndevos those are the four Gluster servers that power CentOS' DevCloud (VMs on OpenNebula for projects and other stuff)
17:31 anoopcs Cool.
18:23 gem joined #gluster-dev
18:41 rraja joined #gluster-dev
18:41 hagarth joined #gluster-dev
20:58 shaunm joined #gluster-dev
23:27 luizcpg joined #gluster-dev
23:28 opelhoward_ joined #gluster-dev
23:43 luizcpg joined #gluster-dev
