
IRC log for #gluster-dev, 2017-04-11


All times shown according to UTC.

Time Nick Message
00:16 major joined #gluster-dev
01:26 percevalbot joined #gluster-dev
01:46 ankitr joined #gluster-dev
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
01:54 susant joined #gluster-dev
02:33 gyadav joined #gluster-dev
03:03 prasanth joined #gluster-dev
03:17 riyas joined #gluster-dev
03:22 nigelb kkeithley: no idea about gerrit slowness but your patch is in :)
03:28 susant left #gluster-dev
03:34 itisravi joined #gluster-dev
03:45 atinm joined #gluster-dev
03:55 skumar joined #gluster-dev
04:11 ashiq joined #gluster-dev
04:20 sona joined #gluster-dev
04:23 gyadav joined #gluster-dev
04:36 susant joined #gluster-dev
04:38 rafi joined #gluster-dev
04:43 susant left #gluster-dev
04:47 Shu6h3ndu joined #gluster-dev
04:52 kotreshhr joined #gluster-dev
04:54 jiffin joined #gluster-dev
05:05 ankitr joined #gluster-dev
05:05 skumar_ joined #gluster-dev
05:07 kdhananjay joined #gluster-dev
05:08 kdhananjay joined #gluster-dev
05:13 skoduri joined #gluster-dev
05:14 rraja joined #gluster-dev
05:16 jiffin joined #gluster-dev
05:20 skumar joined #gluster-dev
05:27 apandey joined #gluster-dev
05:27 amarts joined #gluster-dev
05:28 jiffin joined #gluster-dev
05:30 susant joined #gluster-dev
05:31 pkalever joined #gluster-dev
05:33 apandey_ joined #gluster-dev
05:36 jiffin joined #gluster-dev
05:39 susant left #gluster-dev
05:44 aravindavk joined #gluster-dev
05:44 hgowtham joined #gluster-dev
05:44 msvbhat joined #gluster-dev
05:53 sanoj joined #gluster-dev
05:54 nishanth joined #gluster-dev
05:55 rastar joined #gluster-dev
05:59 msvbhat joined #gluster-dev
06:01 kdhananjay joined #gluster-dev
06:06 prasanth joined #gluster-dev
06:07 sanoj joined #gluster-dev
06:14 jiffin joined #gluster-dev
06:15 kdhananjay joined #gluster-dev
06:20 kdhananjay left #gluster-dev
06:42 kdhananjay joined #gluster-dev
06:49 ankitr joined #gluster-dev
07:08 gyadav joined #gluster-dev
07:24 susant joined #gluster-dev
07:24 susant aravindavk++
07:24 glusterbot susant: aravindavk's karma is now 14
07:27 susant left #gluster-dev
07:33 msvbhat joined #gluster-dev
07:36 skumar_ joined #gluster-dev
07:39 devyani7_ joined #gluster-dev
07:47 skumar joined #gluster-dev
07:52 apandey__ joined #gluster-dev
09:02 ankitr joined #gluster-dev
09:06 ankitr joined #gluster-dev
09:25 kdhananjay joined #gluster-dev
09:25 ankitr joined #gluster-dev
09:43 ankitr joined #gluster-dev
09:54 pkalever joined #gluster-dev
09:54 kotreshhr left #gluster-dev
10:01 kdhananjay joined #gluster-dev
10:02 pranithk1 joined #gluster-dev
10:12 skumar joined #gluster-dev
10:12 ankitr joined #gluster-dev
10:19 pranithk1 xavih: hey I want to talk to you about https://review.gluster.org/16985, let me know when would be a good time
10:21 ankitr joined #gluster-dev
10:29 kkeithley nigelb, misc: strange gerrit voting on https://review.gluster.org/#/c/16804/.  It passed smoke, centos regression was aborted, but smoke received -1 !  how?
10:31 xavih pranithk1: tell me
10:32 pranithk1 xavih: hey! how are you?
10:40 pranithk1 xavih: when you say metadata is also updated, I am not understanding if you are referring to the xattr value of metadata or to inode metadata
10:40 pranithk1 xavih: I will read your comments again with that context
10:40 ppai joined #gluster-dev
10:43 ashiq joined #gluster-dev
10:48 xavih pranithk1: inode metadata, which basically means an increment of the metadata counter in trusted.ec.version
10:48 pranithk1 xavih: oh you mean EC specific metadata, not XFS metadata.
10:49 xavih pranithk1: well, ec metadata is updated because there's a change in other metadata
10:50 xavih pranithk1: when you write to a file, the modification time is updated, so ec increments the metadata counter
10:50 pranithk1 xavih: true
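For context, a minimal sketch of the xattr xavih refers to above (hypothetical brick path; run on the brick backend as root, not on the client mount). As I understand it, trusted.ec.version packs two 64-bit counters, one for data and one for metadata:

    # dump the ec version xattr of a file directly on a brick
    getfattr -n trusted.ec.version -e hex /bricks/b1/brick/somefile
    # trusted.ec.version=0x<data counter><metadata counter>
    # a write bumps the data counter, and because the write also updates mtime,
    # ec bumps the metadata counter in the same round of updates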
10:50 pranithk1 xavih: You are saying there is a conflicting case when only DATA operation is done right?
10:55 nigelb kkeithley: that's my fault. fixing.
10:55 xavih pranithk1: yes, if a fop is declared to change only data, then we can have cases that don't work fine
10:58 pranithk1 xavih: Could you give an example? Let's call imaginary data-fop as DFOP
10:58 pranithk1 xavih: let's say DFOP updates only data and doesn't touch metadata
11:03 xavih pranithk1: this is the example I wrote in the comments
11:03 kdhananjay joined #gluster-dev
11:04 xavih pranithk1: if we send an xattrop only to the bricks that succeeded the DFOP, we can cause damage in metadata because versions won't match on bricks that had healthy metadata but failed on a data change
11:05 ashiq joined #gluster-dev
11:06 pranithk1 xavih: Why won't they match? We didn't touch metadata versions at all right? We only increment DFOP versions
11:06 pranithk1 xavih: I mean only DATA version
11:07 xavih pranithk1: but we have eager-locking and we can have multiple data and metadata fops merged into a single round of updates
11:07 xavih pranithk1: so we need to update both data and metadata counters, but we cannot do that only on bricks that succeeded all data operations
11:08 pranithk1 xavih: okay. This issue exists even with normal transaction right?
11:08 xavih pranithk1: what do you mean by normal transaction?
11:09 pranithk1 xavih: I mean the present code without any of Ashish' changes?
11:10 xavih pranithk1: Ashish' change covers one of the problems (case B in the comment I wrote) but not case C
11:10 pranithk1 xavih: Forget about Ashish' patch altogether. The issue you mentioned exists even without his patch right?
11:11 xavih pranithk1: yes, of course
11:11 pranithk1 xavih: ah! :-)
11:12 pranithk1 xavih: Okay now I don't have any confusion :-). Is that why you are saying we should have different good_masks i.e. data_good_mask, mdata_good_mask and we should send different xattrops based on these masks?
11:14 xavih pranithk1: yes, I think that could be a good solution
11:14 pranithk1 xavih: I think I got it now. I was under the impression that Ashish' patch is introducing some new problem and was thinking in that direction, but I am not finding any problem that wasn't there earlier :-)
11:15 xavih pranithk1: oh, sorry for the confusion...
11:15 pranithk1 xavih: don't say sorry man, it is fine :-)
11:16 xavih pranithk1: the problem was there and I saw it when Ashish wrote the patch, so I wanted it to be solved also
11:17 pranithk1 xavih: yeah yeah, now I got the whole thing. I kept thinking there is nothing that is new that went bad because of his patch apart from the bug you found
11:18 kkeithley nigelb++ for patching netbsd regression
11:18 glusterbot kkeithley: nigelb's karma is now 50
11:19 pranithk1 xavih: Maybe it is a good idea to send a single xattrop when we can. Only in the case where good_data_mask and good_mdata_mask don't match do we need to send extra xattrops for the differing ones
11:20 nigelb kkeithley: the -1 to smoke was my attempt at fixing the Jenkins voting issues.
11:20 nigelb Sadly, there appears to be a bug in Jenkins or the plugin, breaking it :(
11:20 kkeithley okay, no prob. just caught me by surprise
11:23 kdhananjay joined #gluster-dev
11:24 pranithk1 xavih: Do you mind if we send this part of the solution as a separate patch? We would like to get a fix for just this 'healing' issue, which will need less testing than the complete solution.
11:25 jiffin joined #gluster-dev
11:26 rraja joined #gluster-dev
11:33 itisravi joined #gluster-dev
11:35 jiffin joined #gluster-dev
11:35 rafi joined #gluster-dev
11:38 apandey_ joined #gluster-dev
11:38 xavih pranithk1: it's ok
11:38 pranithk1 xavih: Cool. I just explained Ashish, he also got it. We will first send the healing change. After that is in, we will send different masks patch...
11:40 pranithk1 xavih: The moment I gave him the example of one data fop succeeding on b1, b2 and a metadata fop succeeding on b2, b3, with us updating versions only on b2, he got it :-)
11:40 pranithk1 xavih: oh wait, it won't even proceed now with the fop
11:41 pranithk1 xavih: which also needs to be changed right?
11:45 pranithk1 xavih: Maybe we should take a 4+2 example, then it would be simpler to explain.
11:45 pranithk1 xavih: okay. So we will do this once the present issue is fixed.
11:50 skumar Reminder: Gluster Bug Triage will begin in 10 minutes..
12:02 skoduri joined #gluster-dev
12:04 misc I am doing a few changes on the gerrit server at the proxy level to fix SSL certificate expiration
12:05 misc so it might show 'not found' or the CentOS default page from time to time
12:05 misc I fix that as soon as I see it, i.e. within the 20 seconds after the change :)
12:18 ndevos skumar++
12:18 glusterbot ndevos: skumar's karma is now 2
12:18 rafi skumar++
12:18 glusterbot rafi: skumar's karma is now 3
12:18 skumar kkeithley++ ndevos++ rafi++ . Thanks.
12:18 glusterbot skumar: kkeithley's karma is now 179
12:18 glusterbot skumar: ndevos's karma is now 346
12:18 glusterbot skumar: rafi's karma is now 62
12:25 skumar amarts ++
12:25 skumar amarts++
12:25 glusterbot skumar: amarts's karma is now 1
12:25 amarts this is for? :-p I was just sitting there :-)
12:29 kkeithley skumar++
12:29 glusterbot kkeithley: skumar's karma is now 4
12:30 skumar amarts, for attending the meeting :)
12:33 samikshan nigelb: Thanks for the machine.
12:33 samikshan nigelb++
12:33 glusterbot samikshan: nigelb's karma is now 51
12:33 gyadav joined #gluster-dev
12:43 hgowtham joined #gluster-dev
12:44 rafi joined #gluster-dev
12:45 msvbhat joined #gluster-dev
12:51 ndevos misc: are you still working on Gerrit? I get errors while trying to review patches :-/
12:52 misc ndevos: I did finish, what error ?
12:52 ndevos hmm, when I click on a filename for a change, I get: The page you requested was not found, or you do not have permission to view this page.
12:52 misc ok
12:52 ndevos this is displayed in the gray pop-up kindof screen
12:52 misc I was fearing this would happen, but forgot how to reproduce
12:52 misc I will rollback the change
12:54 misc ndevos: does it work now ?
12:54 ndevos misc: yes!
12:55 misc ok so switching from mod_proxy to mod_rewrite has some issues
13:01 kotreshhr joined #gluster-dev
13:10 msvbhat joined #gluster-dev
13:12 ira joined #gluster-dev
13:14 nigelb misc: ah, yes. Don't do that.
13:14 misc nigelb: we did upstream, for a few good reasons (like letsencrypt)
13:15 misc nigelb: this has to do with the encoding?
13:15 nigelb Yeah.
13:15 nigelb Play with staging first.
13:16 atinm joined #gluster-dev
13:16 rraja joined #gluster-dev
13:17 misc well, in this case, this was to renew letsencrypt, so now this is renewed, I have 3 months to figure that out :)
13:19 nigelb Ah.
13:19 nigelb We might want to do something special there.
13:19 nigelb Like have one path that doesn't go via apache.
13:20 nbalacha joined #gluster-dev
13:20 misc yeah, that's why
13:20 misc but iirc, mod_proxy does not support exclusion (or at least, not that I remember)
13:20 misc but I will find a way
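For context on the exclusion question: a hedged httpd sketch, not the actual review.gluster.org config (the paths and backend port are assumptions), showing how mod_proxy can skip one path by mapping it to '!' ahead of the catch-all ProxyPass:

    # serve ACME/letsencrypt challenges locally, proxy everything else to Gerrit
    Alias /.well-known/acme-challenge/ /var/www/letsencrypt/.well-known/acme-challenge/
    ProxyPass /.well-known/acme-challenge/ !
    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/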
13:27 shyam joined #gluster-dev
13:31 gyadav_ joined #gluster-dev
13:44 riyas joined #gluster-dev
13:54 ankitr joined #gluster-dev
13:59 nbalacha joined #gluster-dev
14:10 gyadav_ joined #gluster-dev
14:58 annettec joined #gluster-dev
14:59 atinm joined #gluster-dev
15:08 wushudoin joined #gluster-dev
15:12 msvbhat joined #gluster-dev
15:15 amarts joined #gluster-dev
15:41 gyadav joined #gluster-dev
15:43 skoduri joined #gluster-dev
15:50 major make it so
15:51 major so .. snapshots are expected to be read-only at the gluster level .. do we want to enforce that at the filesystem level?
16:07 vbellur joined #gluster-dev
16:09 ankitr joined #gluster-dev
16:10 susant joined #gluster-dev
16:11 susant left #gluster-dev
16:25 vbellur joined #gluster-dev
16:26 vbellur joined #gluster-dev
16:26 vbellur joined #gluster-dev
16:27 vbellur joined #gluster-dev
16:27 vbellur joined #gluster-dev
16:28 vbellur joined #gluster-dev
16:29 ndevos major: ideally I would guess so, but what about when a brick that has a snapshot gets replaced? it would need healing...
16:30 major hmm
16:30 major that brings me to my other question I suppose..
16:31 major the current restore "appears" to just mount the snapshot as the primary volume..
16:31 major why not make a clone of the snapshot and then mount that back as the primary?
16:31 major basically .. the current restore discards the old master volume, and the snapshot volume .. as a result of the restore
16:32 major or is there some method to do that via the CLI interface and combinations of snapshot and clones?
16:32 major can you snapshot2 from snapshot1 and then restore snapshot2?
16:33 * major thinks.
16:33 major need to go play with all of this to see what sort of combinations can be applied
16:36 * ndevos only uses LVM snapshots for VMs on his laptop, a 'gold' VM and snapshots with copy-on-write
16:36 major heh
16:36 major I think part of the problem I am having is that there are a lot of terms being ... redefined..
16:37 major or .. they are not the definitions I have used for the past 20 years ...
16:37 major not complaining .. just .. defending my stumblings ;)
16:37 ndevos it would be good to have a list of actions/definitions that are expected to work 'online', and which require an 'offline' mode
16:38 ndevos kkeithley: oh, obviously http://termbin.com/9yw7 contains some mistakes, the 1st if in glusterd_nfssvc_supported() is stupid
16:40 rafi joined #gluster-dev
16:41 rastar joined #gluster-dev
16:44 prasanth joined #gluster-dev
16:45 rafi joined #gluster-dev
16:45 major I think I am the only one even still looking at Issue #145
16:45 ndevos shyam: this looks like a DHT + gfapi problem, something you want to look at? https://bugzilla.redhat.com/show_bug.cgi?id=1438817
16:45 glusterbot Bug 1438817: medium, unspecified, ---, ndevos, NEW , cluster.extra-hash-regex option causes segfault
16:49 major anyway .. was sort of waiting for feedback on the whole snapshot issue before I proceeded to go tweak more code .. but at this point there have been no more posts/comments/ideas/anything in the last 9 days outside of mine .. might as well go back to working on the code
16:50 kotreshhr left #gluster-dev
16:50 major also .. is anyone currently working on: https://github.com/gluster/glusterfs-specs/blob/master/under_review/subdirectory-mounts.md ?
16:50 major because that is next on my list of things to play with
16:52 ndevos I'm not really following the snapshot changes, but I can also not find your patch(es) quickly - https://review.gluster.org/#/q/status:open+message:snapshot
16:53 major amarts created an issue on GitHub to discuss/track the future integration of said changes: https://github.com/gluster/glusterfs/issues/145
16:53 amarts major, i haven't replied to your comment on that issue
16:54 major amarts, yup, I know
16:54 amarts i am in favor of delaying 'early optimizations' for the actual code, which solves a real problem
16:54 amarts would like others to present their case first
16:54 major I made a comment before I went on vacation with the family, and when I got back there was still no headway .. so I figured I would write a more detailed post outlining my concerns/questions
16:55 major but .. I am also getting kinda antsy .. I figure I will finish up my zfs changes real fast and push those to github as well
16:55 amarts awesome
16:55 amarts that would be great
16:55 major that and I am still TOTALLY confused as to how to use the dictionary key system to store per-brick data
16:56 major I think part of the issue is that I was hoping there was a way to set the per-brick keys via the CLI
16:57 major and it looks like no such option exists .. which leaves me with a problem with the btrfs side of things
16:57 gyadav joined #gluster-dev
16:57 major won't affect anything else that I can tell
16:57 major maybe I should explain..
16:58 amarts is it a configuration option?
16:58 kotreshhr1 joined #gluster-dev
16:58 major so .. btrfs doesn't use /dev/ paths to reference .. anything .. you perform snapshots against the mount point .. or the subvolume (btrfs subvol .. not gluster subvol)
16:58 kotreshhr1 left #gluster-dev
16:58 amarts or is it inside the code, then it should be part of brick_info_t type of structure?
16:59 major debian/ubuntu have a sort of trick they do which is sort of the equivalent of treating btrfs as a massive storage pool
16:59 * amarts is really hungry, would like to eat first and join back later
17:00 major kinda the equivalent of: mkfs.btrfs /dev/sda1; mount /dev/sda1 /btr; btrfs subvol create /btr/@home; mount /dev/sda1 /home -o subvol=@home
17:00 amarts ah!
17:00 amarts got it
17:00 major sooo .. I kinda do the exact same thing for gluster, but off in /var/run/gluster/btrfs
17:00 amarts samikshan, would be of help there...
17:00 major basically .. there is this need for a btrfs-subvol-prefix='@'
17:01 major well .. I hard-code it as '@'
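A hedged sketch of the scheme major describes, with hypothetical device and brick paths (the '@' prefix and the /var/run/gluster/btrfs location come from the discussion above; the commands are illustrative, not the actual patch):

    # btrfs snapshots work on mounted subvolumes, not /dev paths
    mount /dev/sda1 /var/run/gluster/btrfs              # top-level btrfs pool
    btrfs subvolume create /var/run/gluster/btrfs/@testvol-brick
    mount /dev/sda1 /bricks/testvol -o subvol=@testvol-brick
    # read-only snapshot of the brick's subvolume, kept in the pool
    btrfs subvolume snapshot -r /bricks/testvol /var/run/gluster/btrfs/@testvol-snap1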
17:01 amarts that may not be a bad idea
17:01 major I would "like" it to be optional
17:01 amarts let me check with the team what is actually 'optional'. There are many small hacks which have gone in over time
17:02 amarts so we need to understand if this is good enough to get the first cut, and improve later
17:02 amarts because we are anyways planning to retire current glusterd, and planning to move towards glusterfs/glusterd2
17:02 major honestly, if I had my way I would do: mkdir /gluster/<vol> and then automagically assume /gluster/<vol>/brick when creating the brick on the target path, that way I could have /gluster/<vol>/snaps, and such in that path as opposed to off in /var/run/gluster
17:03 amarts major, ok, i will be leaving for now, have to eat first... will catch up with you later
17:03 major no probs
17:03 major thanks for listening to my ramblings
17:03 amarts no probs, without discussions, no new ideas come to anyone
17:03 amarts :-)
17:03 amarts i will see if i can be online after having dinner to continue this discussion
17:04 amarts whats your timezone?
17:04 major US/Pacific
17:04 amarts ok, cool catch you later then
17:04 major though .. I am usually up until about 1am and wake back up near about 7am ;)
17:05 major and if I have a problem I can't solve then I am just up...
17:13 shyam ndevos: Will take a look, possibly after a couple of reviews that are on my plate ATM
17:15 ndevos shyam: sure, np, just leave a note or something when you get to it, I'll do so too when I find the time
17:21 major soo .. off in /var/lib/gluster/vols/<vol>/*.vol .. each brick-vol has a stack of sub-vols that contain 'option' entries .. and each subvol is in a chain .. do they have to be chained or can they have branches?
17:21 Shu6h3ndu joined #gluster-dev
17:21 major also .. can we add fs-specific subvols there?
17:22 major volume <vol>-btrfs ...
17:44 susant joined #gluster-dev
17:48 xavih joined #gluster-dev
17:57 major so .. regarding tracking the gluster/glusterfs repo ... when a pull-request is made .. is the preference for the branch to be rebased such that it cleanly applies to gluster/glusterfs?
18:01 major seriously .. going to send a crate of cricket bars to redhat and have them distributed to every one of you...
18:01 * major wonders if it is extortion to threaten to send gifts like that..
18:02 major https://exoprotein.com/
18:02 * major grins.
18:03 misc mhh crickets
18:03 major https://chapul.com/
18:04 major I get them for my kids pretty regularly ;)
18:12 msvbhat joined #gluster-dev
18:13 major in case anyone deals heavily with git work-trees and topic-branches and deals with back-porting-hell .. I have the starts of a helper script that "almost" completely automates the process...
18:13 major https://gist.github.com/major0/dddbb0e3023b09fce33fd1b0bbb3f692
18:13 major though .. it is heavily tooled towards worktrees atm
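Not the gist itself, but a minimal sketch of the worktree + cherry-pick flow such a script would automate (branch names and the commit id are hypothetical):

    # one checkout per release branch, side by side with the main clone
    git worktree add ../glusterfs-3.10 origin/release-3.10
    cd ../glusterfs-3.10
    git checkout -b backport/my-fix
    git cherry-pick -x deadbeef        # -x records the original commit id
    # build/test here, then push the backport for review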
18:16 kkeithley ndevos: re: http://termbin.com/9yw7.  That was your first take? Or what?
18:30 kkeithley ndevos: nm, I got it.
18:30 kkeithley and yes Virginia (and ndevos) there is a DIST_SUBDIRS
18:31 kkeithley gratuitous vague reference to an Americanism (Yes, Virginia, there is a Santa Claus)
18:31 kkeithley an obscure Americanism
18:40 msvbhat joined #gluster-dev
18:55 rafi joined #gluster-dev
19:00 susant left #gluster-dev
19:01 major soo .. yah .. in my "wishlist" of things that would generally make a lot of code easier to deal with .. my big one atm, IMHO (very humble...), would be to be able to create a volume such as "gluster volume create testvol replica 3 arbiter 1 node1:/gluster/testvol node2:/gluster/testvol node0:/gluster/testvol", at which point glusterd on each node creates the directory structure
19:01 major '/gluster/testvol/{brick,snaps,gfids,config,...' and generally most (all?) of /var/run/gluster/ could be gutted
19:01 major it also removes the "testing for mountpoint" dilema
19:01 major and generally reduces a huge stack of .. just .. code hackery
19:02 major snapshots and configs and everything are directly associated with their topdir, no more wondering which /var/run/gluster/snaps/* is for which volume...
19:02 major all the vol files and everything can be migrated out of /var/run/gluster/vols/
19:02 major just .. "nice"
19:03 major it also sort of resolves my entire btrfs subvol prefix problem
19:08 major hmm .. and all the config data gets effectively backed up during a snapshot as well
19:11 major question kinda becomes .. why not?
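A hedged sketch of the layout being wished for above (hypothetical paths; glusterd does not create this today):

    # point bricks at /gluster/<vol> and let glusterd lay out the rest
    gluster volume create testvol replica 3 arbiter 1 \
        node1:/gluster/testvol node2:/gluster/testvol node0:/gluster/testvol
    # each node's glusterd would then create, per volume:
    mkdir -p /gluster/testvol/{brick,snaps,gfids,config}
    # brick data under .../brick, with snapshots and vol files co-located with
    # the volume instead of spread across /var/run/gluster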
19:11 wushudoin joined #gluster-dev
19:23 major really that would be akin to just giving glusterd control of a volume storage pool and giving it wrapper interfaces to manage the backend storage for more than just snapshots
19:50 major yah .. a /gluster/ storage pool managed at the point of 'volume create', subdir-mounts, and general bliss
23:03 major anyone have any recommendations for commands to run against the volume to validate snapshots are feature complete?
23:05 major right now I am just doing: create, clone, restore
23:06 major should likely do details as well
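A hedged sketch of a fuller pass over the snapshot CLI (the volume and snap names are hypothetical, and the ordering constraints are from memory):

    gluster snapshot create snap1 testvol
    gluster snapshot list testvol
    gluster snapshot info snap1
    gluster snapshot status snap1
    gluster snapshot activate snap1        # needed before the snap can be mounted or cloned
    gluster snapshot clone clone1 snap1    # the clone comes up as a regular volume
    gluster volume stop testvol
    gluster snapshot restore snap1         # volume must be stopped first
    gluster volume start testvol
    gluster snapshot config testvol        # snap-max limits, auto-delete, etc.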
