
IRC log for #gluster-dev, 2017-03-13


All times shown according to UTC.

Time Nick Message
02:29 atm0s joined #gluster-dev
02:33 gyadav joined #gluster-dev
02:50 atm0s joined #gluster-dev
03:26 apandey joined #gluster-dev
03:26 prasanth joined #gluster-dev
03:30 atinm_ joined #gluster-dev
03:39 vimal joined #gluster-dev
03:45 nbalacha joined #gluster-dev
04:02 ashiq joined #gluster-dev
04:12 gyadav joined #gluster-dev
04:22 Shu6h3ndu joined #gluster-dev
04:24 ashiq joined #gluster-dev
04:40 itisravi joined #gluster-dev
04:41 kdhananjay joined #gluster-dev
04:44 vimal joined #gluster-dev
04:45 jiffin joined #gluster-dev
04:46 apandey joined #gluster-dev
04:55 karthik_us joined #gluster-dev
05:00 ankitr joined #gluster-dev
05:05 asengupt joined #gluster-dev
05:18 ndarshan joined #gluster-dev
05:19 apandey joined #gluster-dev
05:24 apandey joined #gluster-dev
05:30 sanoj joined #gluster-dev
05:34 mchangir joined #gluster-dev
05:38 ShwethaHP joined #gluster-dev
05:39 riyas joined #gluster-dev
05:40 rafi joined #gluster-dev
05:54 Saravanakmr joined #gluster-dev
05:55 ankitr joined #gluster-dev
06:03 susant joined #gluster-dev
06:06 rafi1 joined #gluster-dev
06:13 jiffin joined #gluster-dev
06:14 pkalever joined #gluster-dev
06:22 susant joined #gluster-dev
06:35 atinmu joined #gluster-dev
06:48 ashiq joined #gluster-dev
06:51 karthik_us joined #gluster-dev
07:03 rastar joined #gluster-dev
07:07 karthik_us joined #gluster-dev
07:09 pkalever left #gluster-dev
07:46 pkalever joined #gluster-dev
07:46 rafi1 joined #gluster-dev
08:15 Humble joined #gluster-dev
08:19 atinm joined #gluster-dev
08:26 susant rastar++
08:26 glusterbot susant: rastar's karma is now 48
08:26 susant left #gluster-dev
08:26 susant joined #gluster-dev
08:35 prasanth joined #gluster-dev
08:35 ankitr joined #gluster-dev
08:41 aravindavk joined #gluster-dev
09:00 ankitr joined #gluster-dev
09:04 ankitr joined #gluster-dev
09:08 rraja joined #gluster-dev
09:51 ashiq joined #gluster-dev
10:06 glusterbot` joined #gluster-dev
10:06 glusterbot` joined #gluster-dev
10:06 glusterbot` joined #gluster-dev
10:06 glusterbot` joined #gluster-dev
10:06 glusterbot` joined #gluster-dev
10:07 glusterbot` joined #gluster-dev
10:07 glusterbot` joined #gluster-dev
10:07 glusterbot` joined #gluster-dev
10:07 glusterbot` joined #gluster-dev
10:07 glusterbot` joined #gluster-dev
10:08 glusterbot` joined #gluster-dev
10:08 17SAAJVPY joined #gluster-dev
10:08 gl7sterbot joined #gluster-dev
10:08 glusterbot` joined #gluster-dev
10:08 glusterbot` joined #gluster-dev
10:08 glusterbot` joined #gluster-dev
10:08 94KAAFCP7 joined #gluster-dev
10:09 glusterbot` joined #gluster-dev
10:09 glusterbot` joined #gluster-dev
10:09 glusterbot joined #gluster-dev
10:11 glusterbot` joined #gluster-dev
10:11 glusterbot joined #gluster-dev
10:12 kdhananjay xavih_++
10:12 glusterbot` kdhananjay: xavih_'s karma is now 2
10:12 kdhananjay xavih++
10:12 glusterbot` kdhananjay: xavih's karma is now 28
10:12 glusterbot joined #gluster-dev
10:13 xavih kdhananjay++
10:13 glusterbot` xavih: kdhananjay's karma is now 27
10:13 glusterbot joined #gluster-dev
10:25 msvbhat joined #gluster-dev
10:31 sanoj joined #gluster-dev
10:31 glusterbot joined #gluster-dev
10:33 glusterbot joined #gluster-dev
10:33 gyadav joined #gluster-dev
10:46 kkeithley well, the glusterfs-3.10.0rc0 tarball certainly is still on download.gluster.org, (in https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.0rc0/ fwiw) but the sha256sum file for 3.10.0 GA was not copied correctly from bits.gluster.org (my fault). I've fixed that now.
10:52 bfoster joined #gluster-dev
10:53 nigelb JoeJulian: i think glusterbot is having some issues?
11:04 ndevos pkalever: is there a reason why rpcgen is not run during building gluster-block? care if I delete the generated files and add the generation in a Makefile?
11:08 pkalever ndevos: as of now we are not generating the code via block.x, the idea is to use rpcgen in the next release
11:09 ndevos pkalever: well, the files have a header that mentions 'do not edit, code generated by rpcgen' already
11:09 glusterbot joined #gluster-dev
11:10 ndevos pkalever: it's just that rpcgen is not used to generate the files during build
11:10 pkalever ndevos: yes! that is not honored.
11:11 ndevos pkalever: so, if I get you a patch that generates the files, that is acceptable?
11:11 glusterbot joined #gluster-dev
11:11 pkalever ndevos: if you have the changes locally to run rpcgen in the makefile, please feel free to drop them
11:11 pkalever ndevos: absolutely. you are welcome :-)
11:12 ndevos pkalever: yeah, doing that now, I want to be able to pass a Gluster hostname on the create command, so that "localhost" is not required
11:12 glusterbot joined #gluster-dev
11:12 ashiq joined #gluster-dev
11:12 ndevos pkalever: and that requires a change in the cli <-> daemon communication
11:12 * ndevos was wondering why the change in the .x was not picked up
11:13 pkalever ndevos: I think I have a similar patch, will drop that in a while, see if we are not intending to do the same
11:13 glusterbot joined #gluster-dev
11:14 ndevos pkalever: oh, that would be nice - what is "a while"?
11:14 pkalever ndevos: during the initial block rpc dev time, we have clubbed many things into the block.h file, though that is wrong.
11:14 glusterbot joined #gluster-dev
11:14 pkalever ndevos: meanwhile, decoupled them into block_svc_routine.c
11:14 pkalever ndevos: in another 10 mins
11:15 ndevos pkalever: ok, nice!
11:20 pkalever ndevos: dropped the change!
11:25 ndevos pkalever: I think we're talking about different things...
11:26 glusterbot joined #gluster-dev
11:26 ndevos I would like to do: gluster-block create <SOME_GLUSTER_SERVER>/block-test/sample-block
11:26 ndevos where SOME_GLUSTER_SERVER is not "localhost"
11:26 pkalever ndevos: that was intentional.
11:26 pkalever ndevos: IMO
11:27 skoduri joined #gluster-dev
11:27 glusterbot joined #gluster-dev
11:27 ndevos pkalever: well, I'm sure it was not a mistake, but it reduces the flexibility a lot
11:28 ndevos pkalever: I wanted to try it out, and setting up a separate VM for the service doing the exporting is my normal way, it makes it easier to scratch the setup and start again
11:28 glusterbot joined #gluster-dev
11:29 pkalever ndevos: do you mean to separate the gluster nodes and tcmu-runner hosts ?
11:29 glusterbot joined #gluster-dev
11:30 ndevos pkalever: yes, that is something many users tend to do, not touch the gluster storage servers, only add separate servers for accessing, similar to proxy/gateway/..
11:30 bfoster joined #gluster-dev
11:30 glusterbot joined #gluster-dev
11:31 Shu6h3ndu joined #gluster-dev
11:32 glusterbot joined #gluster-dev
11:33 glusterbot joined #gluster-dev
11:33 pkalever ndevos: If I remember it right, vbellur and pkarampu said NACK to that, to maintain the simplicity. The initial design that I had carved out came in a way that storage and iscsi can reside on different setups. At one point I diverged the code to remove these changes.
11:34 glusterbot joined #gluster-dev
11:35 pkalever ndevos: The idea was to keep the setup simple. I agree that users want to keep this in different spaces. Please initiate a thread on this, I think it's much better if we fill in the details there?
11:35 pkalever ndevos: If you remember, we has --hosts and --block-hosts separate ?
11:35 glusterbot joined #gluster-dev
11:35 ndevos pkalever: well, I intend to have "localhost" be the default, but just make it possible to use an other gluster storage server and volume
11:35 pkalever ndevos: /s/has/had
11:36 ndevos pkalever: yeah, I thought it was in the design, but I've not seen the conversation to remove it
11:36 glusterbot_ joined #gluster-dev
11:38 glusterbot joined #gluster-dev
11:38 nishanth joined #gluster-dev
11:39 glusterbot joined #gluster-dev
11:39 pkalever ndevos: sorry about that, that was a quick decision made within the block team, apologies for not making that publicly available. Please initiate a mail thread with this thread. I think vbellur will also be able to share his thoughts there.
11:39 pkalever s/this thread/this sub/
11:40 ndevos pkalever: I'll just send a patch - and please make sure future conversations are on gluster-devel or other public lists where others can follow things
11:40 glusterbot joined #gluster-dev
11:41 glusterbot joined #gluster-dev
11:41 pkalever ndevos: sure. As the initial release has already happened, we are now making it a practice to keep the info available on the gluster-devel list
11:42 ndevos pkalever: ok, thanks
11:42 glusterbot joined #gluster-dev
11:44 glusterbot joined #gluster-dev
11:45 * pkalever still feels it's good if ndevos initiates an email, to see if other maintainers agree on the change?
11:45 glusterbot joined #gluster-dev
11:47 glusterbot joined #gluster-dev
11:47 ndevos pkalever: it's something that is expected to work for basically any gfapi application, if there are objections, I'll see them in the patch review ;)
11:47 ankitr joined #gluster-dev
11:48 ndevos it's not like it is a major feature, it is more a missing standard usage kind of thing
11:49 pkalever ndevos: cool. see you there ;)
12:25 atinm asengupt, https://bugzilla.redhat.com/show_bug.cgi?id=1322145
12:25 glusterbot Bug 1322145: high, unspecified, ---, amukherj, CLOSED EOL, Glusterd fails to restart after replacing a failed GlusterFS node and a volume has a snapshot
12:25 glusterbot` Bug 1322145: high, unspecified, ---, amukherj, CLOSED EOL, Glusterd fails to restart after replacing a failed GlusterFS node and a volume has a snapshot
12:25 glusterbot` Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1322145 high, unspecified, ---, amukherj, CLOSED EOL, Glusterd fails to restart after replacing a failed GlusterFS node and a volume has a snapshot
12:25 glusterbot` glusterbot: -'s karma is now -8
12:25 glusterbot joined #gluster-dev
12:25 atinm asengupt, I've a follow up needinfo, please check
12:30 ashiq joined #gluster-dev
12:32 glusterbot joined #gluster-dev
12:34 glusterbot joined #gluster-dev
12:34 ndevos pkalever: are bugs for gluster-block reported as a GitHub issue?
12:34 ndevos pkalever: also, https://github.com/gluster/gluster-block/pull/1
12:35 glusterbot joined #gluster-dev
12:36 * pkalever looking at the links ndevos provided
12:37 glusterbot joined #gluster-dev
12:37 glusterbot joined #gluster-dev
12:39 pkalever ndevos: yes you can create an issue
12:39 glusterbot joined #gluster-dev
12:39 ndevos pkalever: already did :) https://github.com/gluster/gluster-block/issues/2
12:39 pkalever ndevos: very fast :-)
13:03 rafi just wondering if it is possible to register as a default reviewer for a tree in the source code, any idea?
13:03 glusterbot joined #gluster-dev
13:03 rafi ndevos, nigelb: ^
13:05 glusterbot joined #gluster-dev
13:06 ndevos rafi: not that I know, but we encourage users to set up their own notifications in any case
13:06 ndevos rafi: see 'patches in gerrit' on http://gluster.readthedocs.io/en/latest/Contributors-Guide/Guidelines-For-Maintainers/
13:06 ndevos rafi: I use that to get emails about patches that affect certain files
13:06 glusterbot joined #gluster-dev
13:07 aravindavk joined #gluster-dev
13:08 glusterbot joined #gluster-dev
13:08 rafi ndevos: that also works, but when a new developer submits a patch, how will he/she know whom to add. I understand that they can look into the MAINTAINERS file and add them
13:09 glusterbot joined #gluster-dev
13:09 ndevos rafi: ideally nobody needs to be added, if all developers working on a component have set up their notifications
13:10 rafi ndevos: Do you think a default-reviewer kind of thing for a file/directory makes more sense ;)
13:10 ndevos rafi: I think nigelb was looking into that, by having a Jenkins job check the modified files, and add the maintainers from the MAINTAINERS file
13:11 rafi ndevos: coool, just asked , thanks for the reply
13:11 rafi ndevos++
13:11 glusterbot` rafi: ndevos's karma is now 338
13:11 glusterbot joined #gluster-dev
13:12 ndevos rafi: it is something that comes up every now and then... not sure what the current status of this is
13:12 rafi ndevos: :)
13:12 glusterbot joined #gluster-dev
13:13 ndevos rafi: https://bugzilla.redhat.com/show_bug.cgi?id=1350477 seems to be the task in question
13:13 glusterbot` Bug 1350477: unspecified, unspecified, ---, nigelb, NEW , Test to check if the maintainer reviewed the patch
13:13 glusterbot Bug 1350477: unspecified, unspecified, ---, nigelb, NEW , Test to check if the maintainer reviewed the patch
13:13 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1350477 unspecified, unspecified, ---, nigelb, NEW , Test to check if the maintainer reviewed the patch
13:13 glusterbot` Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1350477 unspecified, unspecified, ---, nigelb, NEW , Test to check if the maintainer reviewed the patch
13:13 glusterbot` glusterbot: -'s karma is now -10
13:13 glusterbot glusterbot`: -'s karma is now -9
13:13 glusterbot Bug 1350477: unspecified, unspecified, ---, nigelb, NEW , Test to check if the maintainer reviewed the patch
13:13 glusterbot` Bug 1350477: unspecified, unspecified, ---, nigelb, NEW , Test to check if the maintainer reviewed the patch
13:13 glusterbot` glusterbot: -'s karma is now -11
13:13 glusterbot glusterbot`: -'s karma is now -12
13:13 glusterbot glusterbot`: -'s karma is now -14
13:13 glusterbot` glusterbot: -'s karma is now -13
13:13 glusterbot joined #gluster-dev
13:14 ndevos oh, wow, many glusterbots!
13:14 rafi glusterbot: seems like you also have the frequent connect and disconnect issue
13:15 rafi ndevos: it is just disconnect and connect, right ?
13:15 glusterbot joined #gluster-dev
13:15 ndevos rafi: yes, but sometimes there are two, and they talk to each other
13:16 glusterbot joined #gluster-dev
13:16 ndevos I wonder if it crashes when there is something like
13:17 rafi ndevos: wow, talking to each other is always helpful, or sometimes ;)
13:17 glusterbot joined #gluster-dev
13:17 ndevos ... @tell later glusterbot` @tell later glusterbot do you crash
13:18 ndevos but I'm not sure about the "@tell later" syntax, and I don't want to cause too much mess
13:18 ndevos rafi: helpful, and maybe glusterbot does not feel lonely anymore
13:18 glusterbot joined #gluster-dev
13:19 rafi ndevos: true, but seems like they have an issue now,
13:19 glusterbot joined #gluster-dev
13:20 ndevos rafi: yeah, the whole day already :-/
13:20 ndevos we just have to wait until JoeJulian wakes up
13:20 glusterbot joined #gluster-dev
13:21 rafi ndevos: conciliator
13:26 rafi1 joined #gluster-dev
13:27 glusterbot joined #gluster-dev
13:28 skoduri joined #gluster-dev
13:28 glusterbot joined #gluster-dev
13:29 shyam joined #gluster-dev
13:42 rafi joined #gluster-dev
13:50 Shu6h3ndu joined #gluster-dev
13:54 Shu6h3ndu_ joined #gluster-dev
13:56 glusterbot joined #gluster-dev
13:56 nbalacha joined #gluster-dev
14:04 sanoj joined #gluster-dev
14:05 glusterbot joined #gluster-dev
14:06 glusterbot joined #gluster-dev
14:07 Humble joined #gluster-dev
14:35 susant left #gluster-dev
14:53 shyam joined #gluster-dev
14:54 glusterbot joined #gluster-dev
14:56 glusterbot joined #gluster-dev
14:57 vbellur joined #gluster-dev
14:57 glusterbot joined #gluster-dev
14:59 glusterbot joined #gluster-dev
14:59 ira joined #gluster-dev
15:01 glusterbot joined #gluster-dev
15:04 glusterbot joined #gluster-dev
15:06 gyadav joined #gluster-dev
15:06 glusterbot joined #gluster-dev
15:08 glusterbot joined #gluster-dev
15:09 glusterbot joined #gluster-dev
15:10 glusterbot joined #gluster-dev
15:10 wushudoin joined #gluster-dev
15:28 gyadav joined #gluster-dev
15:39 ankitr joined #gluster-dev
15:40 glusterbot joined #gluster-dev
15:42 glusterbot_ joined #gluster-dev
15:44 glusterbot joined #gluster-dev
15:44 shyam joined #gluster-dev
16:15 aravindavk joined #gluster-dev
16:15 glusterbot joined #gluster-dev
16:16 shyam joined #gluster-dev
16:33 gyadav joined #gluster-dev
16:39 msvbhat joined #gluster-dev
16:48 jiffin joined #gluster-dev
16:48 glusterbot joined #gluster-dev
16:49 msvbhat joined #gluster-dev
16:55 shyam joined #gluster-dev
16:56 glusterbot joined #gluster-dev
16:58 glusterbot joined #gluster-dev
16:59 ankitr joined #gluster-dev
17:36 samikshan joined #gluster-dev
17:36 glusterbot_ joined #gluster-dev
17:38 glusterbot joined #gluster-dev
17:40 rafi joined #gluster-dev
17:48 rastar joined #gluster-dev
18:15 major really feel that the core of the glusterd_(btrfs|lvm)_snapshot_device() should get moved into glusterd_(btrfs|lvm)_snapshot_create()
18:15 rastar joined #gluster-dev
18:15 glusterbot joined #gluster-dev
18:16 major would bring the code down to 3 functions for supporting a given filesystem
18:16 major and .. make the *_create() side a lot more sane
18:16 glusterbot joined #gluster-dev
18:17 major 4 functions, glusterd_is_(lvm|btrfs)_brick, glusterd_(lvm|btrfs)_snapshot_create, glusterd_(lvm|btrfs)_snapshot_delete, and glusterd_(lvm|btrfs)_snapshot_details
18:18 glusterbot joined #gluster-dev
18:18 rafi joined #gluster-dev
18:19 glusterbot joined #gluster-dev
18:19 major right now my btrfs code abuses brickinfo->device_name as returned by glusterd_btrfs_snapshot_device() as a method of passing the target btrfs subvol name, since there isn't another way to get that information moved around .. and .. ironically .. just deleting the *_snapshot_device() function, moving the code, and changing the arguments to the *_snapshot_create() would fix it all
18:20 major erm .. brickinfo->device_path even
18:20 major btrfs doesn't even need that particular function .. can make the LVM version static and call it from w/in the _create()
18:37 major I suppose at this point I am just hoping to use my existing code to understand what is truly "common" between zfs, btrfs, and lvm in regards to supporting snapshots and stuffing the target-specific stuff into their respective glusterd-<type>-snapshot.[ch] files
18:38 major as opposed to trying to crowbar btrfs to do it the LVM way .. though .. for at least part of this .. I had to borrow an idea from debian/ubuntu for treating btrfs as if it had volume groups
18:39 major which reminds me ..
18:39 glusterbot joined #gluster-dev
18:39 major anyone have any pointers for adding per-brick dictionary keys for configuration purposes?
18:40 major I need to add a btrfs.subvol-prefix="" option
18:40 rastar joined #gluster-dev
18:42 vbellur major: have you looked into glusterd_brickinfo_t ?
18:43 vbellur looks like you have :)
18:50 major go go pbuilder
18:51 major while that is building..
18:53 major and am fairly tickled really .. this weekend I think I had 2 or 3 segv's, several failed snapshot attempts, restarted the various servers I dunno how many times ... easily in the dozens .. and my test volume is still solid
18:55 raghu joined #gluster-dev
18:56 raghu vbellur: ping
18:58 vbellur raghu: pong, glad to have you in here
18:58 raghu vbellur: :)
18:59 vbellur major: raghu is the (new) maintainer for snapshots. He can offer help with all your questions related to snapshots.
19:00 major cool!
19:01 raghu major: Hello. :)
19:02 major luser@node0:~$ sudo gluster snapshot create testvol0-2017031212 testvol0
19:02 major snapshot create: success: Snap testvol0-2017031212_GMT-2017.03.13-19.02.06 created successfully
19:02 major luser@node0:~$ grep 'gluster/snap' /proc/mounts
19:02 major /dev/sda1 /run/gluster/snaps/42f2d922603947ad8db4edcd01b84210/brick3 btrfs rw,relatime,space_cache,subvolid=297,subvol=/@42f2d922603947ad8db4edcd01b84210_0 0 0
19:02 major luser@node0:~$ sudo gluster snapshot delete testvol0-2017031212_GMT-2017.03.13-19.02.06
19:02 major Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
19:02 major snapshot delete: testvol0-2017031212_GMT-2017.03.13-19.02.06: snap removed successfully
19:03 major bleh.. my damn temp directories are still in /var/run/gluster/btrfs/ .. grrr..
19:05 raghu major: You mean, despite deleting the snapshot, there is still some information related to it present?
19:05 major no .. the snapshot code is fine .. it is my code that is the issue ..
19:06 major my cleanup code in particular
19:06 major https://github.com/major0/glusterfs/tree/btrfs-snapshots
19:07 raghu major: Ahh, OK. Let me check
19:07 major well .. that topic branch uses this branch as its upstream: https://github.com/major0/glusterfs/tree/lvm-snapshot-cleanup
19:08 major and that branch is based on a repository I found which had a patch for LVM+ZFS by Sriram Raghunathan
19:08 major I just cleaned it up and split out the LVM vs the ZFS code into different branches .. ultimately the LVM portion was just moving LVM functions into their own source file ..
19:08 major I sort of left the ZFS code in its own branch .. getting stale atm
19:11 major anyway .. I am currently borrowing an idea from Debian/Ubuntu in which they create subvolumes for all filesystems that exist on the btrfs filesystem .. so in order to manage subvolumes my btrfs code temporarily mounts the brick->device_path w/out any mount options onto /var/run/gluster/btrfs/<snapname>_<brickcount>/ and then manages subvolumes w/in there as
19:11 major /var/run/btrfs/<snapname>_<brickcount>/@<snapname>_<brickcount>
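[Editor's note: composing the temporary management path described above might look like the following. This uses the /var/run/gluster/btrfs/ form major also mentions; the helper name is made up and not part of the actual branch.]

```c
/* Hypothetical helper: build the temp btrfs subvolume path
 * /var/run/gluster/btrfs/<snapname>_<brickcount>/@<snapname>_<brickcount>
 * following the naming convention described in the discussion. */
#include <stdio.h>

static int btrfs_subvol_path(char *buf, size_t len,
                             const char *snapname, int brickcount) {
    /* returns the number of characters written (snprintf semantics) */
    return snprintf(buf, len, "/var/run/gluster/btrfs/%s_%d/@%s_%d",
                    snapname, brickcount, snapname, brickcount);
}
```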
19:12 major generally that has allowed for a fairly generic translation of the LVM code to the btrfs code .. except for one function that I hope to just make static inside the LVM tree as it isn't needed for btrfs, nor is it going to be needed for ZFS
19:12 major anyway .. my managing of temp mount points in /var/run/btrfs/ is working .. but my cleanup of these mount points is not .. umount is fine .. but my recursive_rmdir() if the umount succeeds seems to be having issues
19:13 major will figure it out
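[Editor's note: a minimal sketch of what a working recursive directory cleanup could look like using POSIX nftw(). This stands in for the recursive_rmdir() mentioned above; the actual glusterd helper differs. FTW_DEPTH is the key detail: children are visited before their parent, so rmdir() never sees a non-empty directory.]

```c
/* Illustrative recursive rmdir for temp mount points such as
 * /var/run/gluster/btrfs/<snapname>_<brickcount>/ (not glusterd's code). */
#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <unistd.h>

static int rm_entry(const char *path, const struct stat *sb,
                    int typeflag, struct FTW *ftwbuf) {
    (void)sb; (void)ftwbuf;
    /* directories must go through rmdir(), everything else through unlink() */
    int rc = (typeflag == FTW_DP) ? rmdir(path) : unlink(path);
    if (rc)
        perror(path);
    return rc;
}

int recursive_rmdir_sketch(const char *dir) {
    /* FTW_DEPTH: post-order traversal; FTW_PHYS: do not follow symlinks */
    return nftw(dir, rm_entry, 16, FTW_DEPTH | FTW_PHYS);
}
```

A common failure mode for this kind of cleanup is attempting the rmdir while the directory is still a mount point; the umount must fully complete first, which may be what the branch is hitting.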
19:14 major while my 2 topic branches are generally anchored off of upstream/master, I have an integration branch which I use for pbuilder on Ubuntu (xenial)
19:16 raghu major: Hmm. OK. "https://github.com/major0/glusterfs/tree/btrfs-snapshots" is the code base you are currently using to try the snapshots with btrfs?
19:17 major well .. I rebase that onto a v3.10.0 branch to spit out a patch for building Ubuntu packages ..
19:17 major but .. yes
19:18 major all my code for btrfs goes there, and my LVM code goes into the lvm-snapshot-cleanup branch
19:20 raghu major: OK. Is the "lvm-snapshot-cleanup" branch rebased (or going to be rebased) onto v3.10.0?
19:20 major basically the btrfs-snapshot branch's parent is the lvm-snapshot-cleanup branch, whose parent is origin/master, whose parent is upstream/master :)
19:20 raghu major: Got it :)
19:21 major I have a rebased version of the btrfs-snapshot branch against v3.10.0 which includes changes from lvm-snapshot-cleanup .. but I haven't pushed it .. just been using it as an integration branch
19:23 raghu major: Hmm. I think I will look at upstream/master based btrfs-snapshot branch first.
19:23 major cool
19:23 raghu major: I will go through the changes. Its good to have btrfs snapshot changes in the code base. :)
19:23 major right now I am trying to fix cleaning up my temp cruft and then I was going to de-dup the btrfs code a bit
19:25 major well .. there is a HUGE section of glusterd_btrfs_snapshot_remove() that is almost 100% copy/paste from glusterd_lvm_snapshot_remove() .. was debating making a new function off in glusterd-snapshot-utils.c or some such and moving that code there
19:25 major and managing the btrfs specific temp dirs needs to be split out of the create/remove functions and made static as it is all duplicated between them both ..
19:25 major was mostly just trying to get stuff working
19:26 raghu major: I think better to have glusterd_btrfs_snapshot_remove and glusterd_lvm_snapshot_remove functions separate.
19:27 major I agree
19:27 major just saying there is a huge stack of code between them that is identical and can be easily made into a utility function
19:27 major not entirely certain why it was in the LVM side really ..
19:28 major it's related to managing the daemon
19:33 raghu major: Hmm. I agree. Initially gluster's snapshots had only LVM. So I guess that is why it was on the lvm side. But if some functions of lvm and btrfs snapshots are identical and can be made into a utility function it would be good. The right way for me would be to have a wrapper function which calls either the lvm snapshot function or the btrfs snapshot function based on how the user
19:33 raghu has configured the snapshots
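19:33 raghu (editor's note: a wrapper of the kind described above could be sketched as follows; the names and signatures are illustrative guesses, with stubs standing in for the real glusterd_lvm_snapshot_remove()/glusterd_btrfs_snapshot_remove() entry points)

```c
#include <string.h>

/* Illustrative stubs; the real entry points live in
 * glusterd-lvm-snapshot.c and glusterd-btrfs-snapshot.c. */
static int lvm_snapshot_remove(const char *snap)   { (void)snap; return 0; }
static int btrfs_snapshot_remove(const char *snap) { (void)snap; return 0; }

/* Hypothetical wrapper: dispatch on the backend the user configured. */
int snapshot_remove(const char *backend, const char *snap)
{
    if (strcmp(backend, "lvm") == 0)
        return lvm_snapshot_remove(snap);
    if (strcmp(backend, "btrfs") == 0)
        return btrfs_snapshot_remove(snap);
    return -1; /* unknown/unsupported backend */
}
```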
19:34 raghu major: If you want, you can add a separate file "gluster-btrfs-snapshot.c" and add btrfs specific code there.
19:34 major in glusterd_btrfs_snapshot_remove() there is a "FIXME: begin" and "FIXME: end" comment which highlights the identical code between LVM and btrfs
19:34 major already is :)
19:34 major $ ls snapshot/*.[ch]
19:34 major snapshot/glusterd-btrfs-snapshot.c  snapshot/glusterd-btrfs-snapshot.h  snapshot/glusterd-lvm-snapshot.c  snapshot/glusterd-lvm-snapshot.h
19:37 raghu major: Ahh, great. That's nice. Let me take a look at it. If you have any changes that you want to merge in your branch, please go ahead. I will start looking into it. Will be happy to work with you to get those changes merged into the code base
19:37 raghu major++
19:37 glusterbot raghu: major's karma is now 1
19:40 major cool .. again .. right now I am heavily into cleaning up what I have and de-dupping the copy/paste code .. hoping that by having support for btrfs and lvm I can find all the duplicate code and migrate it out to the generic side and really reduce the amount of work needed in the lvm/btrfs/zfs specific sources
19:41 major atm I am fairly confident I can get it down to 4 functions
19:41 major well .. 4 functions per target
19:42 major with any target-specific utility work hidden away as static
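19:42 major (editor's note: the "4 functions per target, utilities static" shape described above reads like a per-backend ops table; below is a minimal hypothetical sketch — the member names are guesses, not gluster's actual API)

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical ops table: each backend (lvm, btrfs, later zfs) fills in
 * one of these, and any backend-specific helpers stay static inside its
 * own .c file. */
struct snap_backend_ops {
    const char *name;
    int (*create)(const char *brick, const char *snap);
    int (*remove)(const char *brick, const char *snap);
    int (*mount)(const char *brick, const char *snap);
    int (*probe)(const char *brick); /* does this brick use the backend? */
};

/* No-op stubs standing in for real backend implementations. */
static int noop2(const char *a, const char *b) { (void)a; (void)b; return 0; }
static int noop1(const char *a)                { (void)a; return 0; }

static const struct snap_backend_ops backends[] = {
    { "lvm",   noop2, noop2, noop2, noop1 },
    { "btrfs", noop2, noop2, noop2, noop1 },
};

/* Generic side: look up a backend by its configured name. */
const struct snap_backend_ops *lookup_backend(const char *name)
{
    for (size_t i = 0; i < sizeof(backends) / sizeof(backends[0]); i++)
        if (strcmp(backends[i].name, name) == 0)
            return &backends[i];
    return NULL;
}
```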
19:44 raghu major: Hmm. That's great :)
19:46 major is there any docs outlining what needs to be done to fire off the automated tests? would like to be able to run the LVM tests to make certain I didn't break anything on that side
19:56 raghu major: I think the regular regression tests are good. Will check and let you know
20:11 major okay .. temp directories cleaned up properly .. was a "me" error in kicking out the patch from my integration branch
20:12 major hate that .. spend more time trying to understand why the code isn't working .. turns out it would work .. if it had been compiled :(
20:31 raghu joined #gluster-dev
20:32 raghu major: Sorry. My system hung and I had to reboot my machine. Did you have any other questions about snapshots?
20:34 raghu major: The tests (mainly regression tests) in the current codebase can be triggered. "prove -r tests"  after cding into the code base will start the tests.
20:49 rastar joined #gluster-dev
21:04 major do those have to be done from a system running glusterfs?
21:05 ndevos major: you run the tests only on a vm that you do not care too much about, they can be quite destructive
21:06 ndevos major: but there is also a run-tests-in-vagrant.sh script, maybe that works better for you
21:06 major cool
21:06 major I don't have anything in place for spinning up VMs on my devstation
21:06 major I usually target some remote device
21:06 major atm I've been targeting a new cluster that has nothing on it
21:07 ndevos that's what I do too, I spin up 1 or more vms for testing, and then delete them when done
21:11 vbellur1 joined #gluster-dev
21:23 shyam joined #gluster-dev
21:53 bwerthmann joined #gluster-dev
21:54 bwerthmann joined #gluster-dev
22:50 vbellur joined #gluster-dev
