
IRC log for #gluster, 2017-03-08


All times shown according to UTC.

Time Nick Message
00:15 vbellur joined #gluster
00:15 rastar joined #gluster
01:07 Jacob843 joined #gluster
01:09 bwerthmann joined #gluster
01:26 moneylotion joined #gluster
01:59 shdeng joined #gluster
02:03 RameshN joined #gluster
02:07 vbellur joined #gluster
02:10 ahino joined #gluster
02:13 daMaestro joined #gluster
02:14 moneylotion_ joined #gluster
02:21 vbellur joined #gluster
02:24 rastar joined #gluster
02:25 EO_ joined #gluster
02:26 EO_ Does glusterfs support inotify?
02:46 moneylotion joined #gluster
02:46 JoeJulian EO_: I don't believe FUSE supports inotify.
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:51 magrawal joined #gluster
02:58 bwerthmann joined #gluster
03:02 Gambit15 joined #gluster
03:27 vbellur joined #gluster
03:30 Shu6h3ndu joined #gluster
03:31 moneylotion joined #gluster
03:34 EO_ https://github.com/libfuse/libfuse/wiki/Fsnotify-and-FUSE
03:34 EO_ iiinterestink
03:34 glusterbot Title: Fsnotify and FUSE · libfuse/libfuse Wiki · GitHub (at github.com)
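(Aside: since inotify events generally aren't delivered for changes that arrive through a FUSE mount, a crude polling loop is one workaround. A minimal sketch, assuming a gluster mount at /mnt/glustervol; the path and interval are placeholders:

    # inotify can't see writes that arrive over the network on a FUSE mount,
    # so periodically diff the tree against a timestamp file instead.
    touch /tmp/watch.stamp
    while sleep 10; do
        touch /tmp/watch.next
        find /mnt/glustervol -newer /tmp/watch.stamp -print
        mv /tmp/watch.next /tmp/watch.stamp
    done
)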
03:35 kramdoss_ joined #gluster
03:42 Shu6h3ndu joined #gluster
03:44 itisravi joined #gluster
03:50 itisravi_ joined #gluster
03:52 sanoj joined #gluster
03:54 jiffin joined #gluster
04:13 gyadav joined #gluster
04:18 arpu joined #gluster
04:19 itisravi_ joined #gluster
04:21 aravindavk joined #gluster
04:21 nbalacha joined #gluster
04:30 ashiq joined #gluster
04:36 sanoj joined #gluster
04:46 ppai joined #gluster
04:50 nishanth joined #gluster
04:57 ashiq joined #gluster
04:59 sona joined #gluster
05:05 susant joined #gluster
05:08 BitByteNybble110 joined #gluster
05:15 chawlanikhil24 joined #gluster
05:15 kdhananjay joined #gluster
05:24 skumar joined #gluster
05:33 buvanesh_kumar joined #gluster
05:33 ndarshan joined #gluster
05:37 BuBU29 joined #gluster
05:39 apandey joined #gluster
05:40 Prasad joined #gluster
05:43 rafi joined #gluster
05:50 PatNarciso my understanding is QSFP is 4xSFP... is a QSFP port backwards compatible with SFP?
05:51 EO_ negatory.
05:51 EO_ It's physically larger.
05:52 PatNarciso right on.  thanks.  I'm putting together an infiniband buildout estimate and... I'm considering 10g Ethernet again.
05:53 PatNarciso correction, my first infiniband estimate.
05:53 bwerthmann joined #gluster
05:55 ankitr joined #gluster
06:00 karthik_us joined #gluster
06:00 rafi joined #gluster
06:06 Philambdo joined #gluster
06:09 [diablo] joined #gluster
06:11 ankitr joined #gluster
06:13 skoduri joined #gluster
06:16 derjohn_mob joined #gluster
06:17 ankush joined #gluster
06:17 jkroon joined #gluster
06:18 rjoseph joined #gluster
06:19 jiffin joined #gluster
06:20 BitByteNybble110 joined #gluster
06:23 hgowtham joined #gluster
06:29 sbulage joined #gluster
06:29 karthik_ joined #gluster
06:32 mhulsman joined #gluster
06:32 RameshN joined #gluster
06:32 atinm joined #gluster
06:40 RameshN joined #gluster
06:41 apandey joined #gluster
06:42 msvbhat joined #gluster
06:43 rjoseph joined #gluster
06:44 itisravi joined #gluster
06:45 Saravanakmr joined #gluster
06:48 bwerthmann joined #gluster
07:02 karthik_ joined #gluster
07:02 rafi1 joined #gluster
07:04 chawlanikhil24 joined #gluster
07:05 chris349 joined #gluster
07:06 pasik joined #gluster
07:13 poornima_ joined #gluster
07:15 mbukatov joined #gluster
07:15 rajesh joined #gluster
07:27 Karan joined #gluster
07:30 jtux joined #gluster
07:31 sona joined #gluster
07:32 derjohn_mob joined #gluster
07:37 jtux joined #gluster
07:39 rastar joined #gluster
07:42 bwerthmann joined #gluster
07:42 BuBU29 joined #gluster
07:46 ahino joined #gluster
07:46 msvbhat joined #gluster
07:47 gyadav joined #gluster
07:54 ivan_rossi joined #gluster
07:54 ivan_rossi left #gluster
08:10 k4n0 joined #gluster
08:12 nishanth joined #gluster
08:29 d0nn1e joined #gluster
08:29 mhulsman joined #gluster
08:33 mhulsman joined #gluster
08:36 bwerthmann joined #gluster
08:37 ndarshan joined #gluster
08:38 msvbhat joined #gluster
08:38 fsimonce joined #gluster
08:47 rjoseph joined #gluster
08:49 flying joined #gluster
09:00 starryeyed joined #gluster
09:04 ankush joined #gluster
09:05 jiffin1 joined #gluster
09:18 ndarshan joined #gluster
09:30 bwerthmann joined #gluster
09:32 rjoseph joined #gluster
09:35 bwerthmann joined #gluster
09:38 kramdoss_ joined #gluster
09:42 Wizek_ joined #gluster
09:45 Philambdo joined #gluster
09:54 jiffin joined #gluster
09:56 gyadav joined #gluster
09:58 gyadav_ joined #gluster
09:59 kramdoss_ joined #gluster
10:04 MrAbaddon joined #gluster
10:07 MrAbaddon joined #gluster
10:08 derjohn_mob joined #gluster
10:09 Jacob843 joined #gluster
10:10 karthik_ joined #gluster
10:18 Seth_Karlo joined #gluster
10:19 Seth_Karlo joined #gluster
10:27 riyas joined #gluster
10:56 nh2 joined #gluster
11:01 Seth_Kar_ joined #gluster
11:10 masber joined #gluster
11:30 chris349 joined #gluster
11:36 bwerthmann joined #gluster
11:37 kramdoss_ joined #gluster
11:43 bfoster joined #gluster
12:00 ppai joined #gluster
12:00 derjohn_mob joined #gluster
12:01 kotreshhr joined #gluster
12:03 karthik_us joined #gluster
12:06 kotreshhr left #gluster
12:14 msvbhat joined #gluster
12:15 valkyr3e joined #gluster
12:27 sona joined #gluster
12:30 msvbhat joined #gluster
12:30 bwerthmann joined #gluster
12:34 Seth_Karlo joined #gluster
12:49 anbehl joined #gluster
12:59 sona joined #gluster
12:59 kpease_ joined #gluster
13:03 RameshN joined #gluster
13:11 RameshN joined #gluster
13:24 plarsen joined #gluster
13:24 bwerthmann joined #gluster
13:37 mb_ joined #gluster
13:39 plarsen joined #gluster
13:39 ira joined #gluster
13:43 unclemarc joined #gluster
13:48 RameshN joined #gluster
13:48 shyam joined #gluster
13:52 ahino joined #gluster
13:56 susant left #gluster
13:59 rafi1 joined #gluster
14:00 RameshN joined #gluster
14:08 RameshN joined #gluster
14:13 TvL2386 joined #gluster
14:15 flying joined #gluster
14:18 bwerthmann joined #gluster
14:25 bwerthmann joined #gluster
14:30 RameshN joined #gluster
14:33 bwerthma1n joined #gluster
14:33 rwheeler_ joined #gluster
14:39 Philambdo joined #gluster
14:39 jiffin joined #gluster
14:40 vinurs joined #gluster
14:57 kotreshhr joined #gluster
15:01 vbellur joined #gluster
15:03 sona joined #gluster
15:06 msvbhat joined #gluster
15:08 tdasilva- joined #gluster
15:13 RameshN joined #gluster
15:13 jiffin joined #gluster
15:14 shyam joined #gluster
15:19 RameshN joined #gluster
15:28 tdasilva- joined #gluster
15:28 decayofmind Hi! If I run "gluster volume heal NAME info" I'm getting two entries, but they are directories. What can I look for to get rid of those?
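(A typical way to chase entries like these, sketched with placeholder volume and brick paths: check whether they are genuine split-brain, inspect the afr xattrs on each brick's copy, and run a full heal if they are merely pending:

    # Are the directories actually split-brain, or just pending heals?
    gluster volume heal NAME info split-brain

    # On each brick, inspect the afr changelog xattrs for the listed directory:
    getfattr -d -m . -e hex /path/to/brick/the-directory

    # If nothing is split-brain, a full self-heal often clears lingering entries:
    gluster volume heal NAME full
)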
15:31 farhorizon joined #gluster
15:42 sanoj joined #gluster
15:43 atm0sphere joined #gluster
15:46 RameshN joined #gluster
15:50 chris349 joined #gluster
15:52 Shu6h3ndu joined #gluster
15:53 RameshN joined #gluster
15:53 buvanesh_kumar joined #gluster
16:00 farhorizon joined #gluster
16:01 atm0sphere joined #gluster
16:08 vbellur joined #gluster
16:09 vbellur joined #gluster
16:10 vbellur joined #gluster
16:11 vbellur joined #gluster
16:21 wushudoin joined #gluster
16:23 Gambit15 joined #gluster
16:23 RameshN joined #gluster
16:27 vbellur joined #gluster
16:28 vbellur joined #gluster
16:28 vbellur joined #gluster
16:29 derjohn_mob joined #gluster
16:29 vbellur joined #gluster
16:29 vbellur joined #gluster
16:33 StormTide joined #gluster
16:34 StormTide anyone know what package/how to get gluster-mountbroker command? Doesn't seem to show up in the 3.10 packages for ubuntu...
16:39 StormTide it's referenced in https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/ ... but for the life of me I can't figure out where it's packaged.
16:39 glusterbot Title: Geo Replication - Gluster Docs (at gluster.readthedocs.io)
16:42 shyam joined #gluster
16:46 mlg9000_1 joined #gluster
16:47 rafi joined #gluster
16:53 idef1x joined #gluster
17:01 farhorizon joined #gluster
17:04 jiffin joined #gluster
17:04 k4n0 joined #gluster
17:04 farhorizon joined #gluster
17:07 anbehl joined #gluster
17:07 farhoriz_ joined #gluster
17:13 pioto joined #gluster
17:38 atinm joined #gluster
17:48 jiffin joined #gluster
18:05 mhulsman joined #gluster
18:05 farhorizon joined #gluster
18:07 baber joined #gluster
18:08 StormTide think i might have figured it out... might be a bug in the configure file re $host_os detection.. does anyone know who maintains the build infrastructure/builds the debs?
18:14 P0w3r3d joined #gluster
18:15 kpease joined #gluster
18:24 cholcombe joined #gluster
18:30 guhcampos joined #gluster
18:34 bwerthmann joined #gluster
18:35 jiffin joined #gluster
18:37 anbehl joined #gluster
18:42 StormTide might have figured it out.. the glusterfs-common package provides peer_mountbroker in /usr/lib/x86_64-linux-gnu/glusterfs/peer_mountbroker.py but does not create the /usr/sbin/gluster-mountbroker link (it's also missing a bunch of others re events api, gsyncd find missing files, etc)... wonder if there's an extra step to create these links that is missing in the package script...
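(If the links really are missing from the package, a stopgap is to create them by hand. A sketch, assuming the CLI name gluster-mountbroker mentioned earlier in the log; the remaining scripts from the paste would need the same treatment:

    # Recreate the entry point the package should have installed
    # (Debian multiarch path; adjust for your architecture):
    ln -s /usr/lib/x86_64-linux-gnu/glusterfs/peer_mountbroker.py \
          /usr/sbin/gluster-mountbroker
)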
18:49 buvanesh_kumar joined #gluster
19:03 shyam joined #gluster
19:10 kotreshhr left #gluster
19:15 mhulsman joined #gluster
19:17 sona joined #gluster
19:18 d0nn1e joined #gluster
19:28 vbellur joined #gluster
19:33 jkroon joined #gluster
19:40 Karan joined #gluster
19:41 oajs joined #gluster
19:48 TvL2386 joined #gluster
20:02 ahino joined #gluster
20:05 a2 joined #gluster
20:12 baber joined #gluster
20:22 derjohn_mob joined #gluster
20:27 JoeJulian @kkeithle: ^^
20:33 kkeithley Maintain? pfft. I just build 'em. ;-)
20:34 JoeJulian :D
20:34 kkeithley which version are we talking about
20:34 JoeJulian Looks like it's 3.10
20:34 bwerthmann joined #gluster
20:35 StormTide re, its 3.10
20:35 StormTide ive got a list of the missing links here im assembling.. i'll paste here
20:36 StormTide https://paste.fedoraproject.org/paste/r8jNG6X5IADzwLNwIOQxI15M1UNdIGYhyRLivL9gydE=
20:37 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
20:37 StormTide took some debugging but i'm now up to the generating ssh keys via gluster-georep-sshkey generate part...
20:39 kkeithley You're hired. When can you start?
20:39 StormTide am about to do the create/push-pem bit... but just reading up a bit more to make sure that its not gonna overwrite my master volume ... trying to replicate an existing volume to a new georeplicant
20:40 StormTide kkeithley: lol, sadly i'm just a php devops nerd here... fixing build scripts and building packages is not my talent ;)
20:41 StormTide i figured that out by apt-get source building the packages and comparing to what was being emitted in the deb's vs the /tmp/ build folder
20:42 kkeithley not bad for a php devops nerd.
20:43 kkeithley StormTide++
20:43 glusterbot kkeithley: StormTide's karma is now 1
20:43 StormTide thanks ;)
20:44 kkeithley /usr/libexec?  I didn't think Debian did /usr/libexec
20:44 StormTide it doesnt, but all the scripts have it hardcoded
20:45 StormTide (in /usr/lib/x86_64-linux-gnu/glusterfs) grep "usr/libexec" * -R -i
20:46 * kkeithley grumbles.  Then the scripts are broken too
20:46 kkeithley sigh
20:46 kkeithley I should have stayed on vacation
20:46 StormTide ;) sorry to be bearer of bad news here
20:49 JoeJulian +1
20:50 major do the documents explicitly recommend that a brick be its own mount point?
20:50 StormTide also the docs at https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/ don't make it clear that, before the generate step, all the master peers have to have passwordless ssh set up
20:50 glusterbot Title: Geo Replication - Gluster Docs (at gluster.readthedocs.io)
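(In other words, per StormTide's finding, something like the following needs to be in place on every master peer before the generate step; hostnames and the slave account are placeholders based on this session:

    # On each master node, as root:
    ssh-keygen -t rsa                      # if no key exists yet
    ssh-copy-id root@other-master          # repeat for every other master peer
    ssh-copy-id glfsgeo@backup.example.ca  # the geo-rep slave account
)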
20:52 farhorizon joined #gluster
20:53 StormTide do you know if: gluster volume geo-replication existing-volume user@slavehost::new-volume create push-pem .... will blow away existing-volume, or will it set it up to be copied to the georeplicant slave? (im hoping the latter but need to be sure)
20:55 Klas latter, but, to be sure, just test it
20:55 Klas (I have tested it, btw, in 3.7.14, just the other day)
20:56 StormTide i dont have a test rig setup here heh
20:56 Klas create a volume
20:56 Klas can be 10M =P
20:56 Klas then create a slave volume
20:56 StormTide yah, thats probably a good idea haha
20:56 Klas can also be 10M =p
20:56 Klas I just mean so that you can be sure for yourself ;)
20:56 Klas but, yeah, you needn't worry
20:56 Klas but I still would myself
20:57 Klas I hate trusting others when it comes to my data =P
20:57 StormTide ive got a backup, but it'd be a pain to restore it haha datasets like 500gb of small file nonsense
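(Klas's suggested dry run boils down to something like this; all names and brick paths are made up, and force merely allows a brick on the root filesystem:

    # Throwaway master volume:
    gluster volume create scratch-master master1:/bricks/scratch/brick force
    gluster volume start scratch-master

    # Throwaway slave volume on the slave cluster:
    gluster volume create scratch-slave slave1:/bricks/scratch/brick force
    gluster volume start scratch-slave

    # Run the exact command in question and confirm scratch-master is untouched:
    gluster volume geo-replication scratch-master glfsgeo@slave1::scratch-slave create push-pem
)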
20:57 major Me too! Thats why I hand encode everthing I want to save onto rock slabs with a chisel and hammer!
20:59 farhoriz_ joined #gluster
21:00 BitByteNybble110 joined #gluster
21:05 major JoeJulian, okay .. soo .. I have generally figured out a way to mimic all the lvm stuff for btrfs .. though .. as it stands it wont work for some btrfs configs ... not w/out some extra fixups
21:06 farhorizon joined #gluster
21:09 intense joined #gluster
21:11 intense i am testing gluster volume file copy speed, mounting it with nfs3 and copying with rsync. When I copy a file, it is copying at a relatively fast 250mb/s but i notice it drops to 9mb/s for brief periods, bringing the average down to 150mb/s. i have observed this behavior consistently. is this a problem with gluster?
21:12 farhorizon joined #gluster
21:12 StormTide klas... .oO(root@node1 ~) gluster volume geo-replication testing-volume glfsgeo@backup.example.ca::testing-volume-geo create push-pem .... Unable to fetch slave volume details. Please check the slave cluster and slave volume. ... any idea what is going sideways?
21:12 arif-ali joined #gluster
21:15 intense i also notice that this only happens when i am copying one large file (10GB), but not 10GB of 250 smaller files...
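(One way to narrow that down is to take rsync out of the picture entirely; a sketch, with paths and sizes purely illustrative:

    # Stream 10GB straight to the NFS-mounted volume and watch for the same dips:
    dd if=/dev/zero of=/mnt/glustervol/bigfile bs=1M count=10240 status=progress

    # If dd is smooth, retry rsync with --inplace so it skips its usual
    # write-to-temp-file-then-rename pattern on the destination:
    rsync --inplace --progress /src/bigfile /mnt/glustervol/
)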
21:17 StormTide if i do the slave status, | localhost |          UP | /path/glusterfs-test/(OK) | glfsgeo(OK) | glfsgeo(testing-volume-geo, real-content-geo)  | ... its all looking happy...
21:18 StormTide there is passwordless ssh setup between master and slave... blah
21:22 snehring StormTide: do you have the glusterfs ports open on the slave?
21:23 StormTide snehring: lemme doublecheck but i think i just added node1's entire ip as allowed
21:23 StormTide yah its open
21:25 StormTide the glusterfs log on node1 just logs 3 lines starting with [glusterd-geo-rep.c:2751:glusterd_verify_slave] 0-management: Not a valid slave
21:26 nirokato joined #gluster
21:26 StormTide https://github.com/gluster/glusterfs/blob/master/xlators/mgmt/glusterd/src/glusterd-geo-rep.c#L2751 hrm
21:26 glusterbot Title: glusterfs/glusterd-geo-rep.c at master · gluster/glusterfs · GitHub (at github.com)
21:27 snehring not the most helpful error
21:27 major JoeJulian, the core issue is ability to use subvol=<name> in the mnt_opts
21:27 StormTide i notice there's a gverify.sh... given my pathing issues earlier lemme see if thats not there
21:28 snehring if it is try to run it with the relevant arguments and see if it gives a more helpful message
21:28 baber joined #gluster
21:29 major so long as no one actiely mounts a btrfs subvolume with the subvol= mountopt, then it might all be pretty cut and dry
21:29 major actively* even
21:30 JoeJulian Hey, it's a start.
21:30 major so btrfs volume (or subvolume) /brick1/vol1/ is likely going to need to have a snapshot destination like /brick1/vol1-snapshots/<UUID>
21:31 major and I am just abusing the crud out of the rest of the lvm-style routines
21:31 cholcombe joined #gluster
21:31 JoeJulian +1
21:31 StormTide snehring: the file wasn't linked ... but still not working... hrm
21:31 major so snapshots are being mounted via subvol= off onto /run/
21:32 major by extending the mnt_opts .. otherwise I don't plan on actually mounting anything since it doesn't need it in that config .. just noop it
21:32 major it would also cause a headache to manage subvol= in the stored mnt_opts in the event of a restore :(
21:33 StormTide snehring: the only other error seems to be ... -glusterfs: failed to get the 'volume file' from server
21:33 major so I just plan on mangling it in-flight when mounting off to /run/ ...
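(For context, the subvol= handling major describes looks roughly like this at the mount level; device and subvolume names are invented, and the paths assume the brick sits at the top level of the btrfs filesystem:

    # A btrfs snapshot is itself a subvolume; exposing it under /run is just
    # a matter of passing subvol= in the mount options:
    btrfs subvolume snapshot -r /brick1/vol1 /brick1/vol1-snapshots/snap1
    mount -o subvol=vol1-snapshots/snap1,ro /dev/sdb1 /run/gluster/snaps/snap1
)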
21:33 snehring is the gluster service running on the slave and the volume started?
21:33 snehring StormTide ^^
21:34 StormTide snehring: its setup using the script...
21:34 StormTide all ive done on the slave is gluster-mountbroker setup <MOUNT ROOT> <GROUP> ... and gluster-mountbroker add slavevol geoaccount .... commands
21:35 StormTide i presume i dont also have to create a slave volume manually? (its handled by those commands right?)
21:35 baber joined #gluster
21:35 snehring pretty sure you still have to create the slave volume
21:35 major atm there is a serious chunk of LVM-specific code that has to be isolated out ..
21:35 StormTide snehring: hrm lemme see here
21:37 StormTide gluster volume status No volumes present ... fml
21:38 StormTide for some reason i thought the mountbroker was setting up a single server volume for this
21:38 snehring I think it just sets up the mountbroker relationship with a volume
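(So the full slave-side sequence is roughly the following, reusing the names from this session; the mount root and group are whatever was passed to setup:

    # 1. The slave volume has to exist and be started first:
    gluster volume create testing-volume-geo slavehost:/bricks/geo/brick
    gluster volume start testing-volume-geo

    # 2. Only then do the mountbroker commands have something to broker:
    gluster-mountbroker setup /var/mountbroker-root geogroup
    gluster-mountbroker add testing-volume-geo glfsgeo
)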
21:40 nh2 joined #gluster
21:41 StormTide i might have screwed up then, seems to be hanging on volume start... think i'll have to delete the mountbroker first
21:47 major JoeJulian, right now glusterd_create_missed_snap() is causing me grief ...
21:50 major part of the issue really is that LVM tends to operate based on the VG names from /dev/, which doesn't help the btrfs side
21:50 major oh .. never mind .. found a work-around
21:53 farhorizon joined #gluster
22:01 farhorizon joined #gluster
22:06 balacafalata joined #gluster
22:06 balacafalata-bil joined #gluster
22:07 StormTide is there any way to bwlimit a geo-replicant?
22:11 farhorizon joined #gluster
22:11 cholcombe joined #gluster
22:12 farhorizon joined #gluster
22:13 StormTide also just to confirm, geo-replication won't cause any sort of read locks to be taken on the filesystem while it syncs, right?
22:14 JoeJulian unless someone else tells me I'm wrong, I believe it does not do any locking.
22:16 farhorizon joined #gluster
22:20 StormTide JoeJulian: cool.. also figured out the rsync bwlimit... but i didn't consider the number of jobs... so it's 3x what i wanted ... but tis ok
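(For the record, the knobs involved look like this; a sketch using the session's volume names, and note rsync's --bwlimit is per sync job, so the effective cap is bwlimit times the number of jobs:

    # Cap each rsync worker (--bwlimit is in KB/s, so ~10 MB/s here):
    gluster volume geo-replication testing-volume \
        glfsgeo@backup.example.ca::testing-volume-geo \
        config rsync-options "--bwlimit=10240"

    # The 3x surprise: each sync job (default 3) runs its own rsync.
    # On 3.x the option may be spelled sync_jobs rather than sync-jobs:
    gluster volume geo-replication testing-volume \
        glfsgeo@backup.example.ca::testing-volume-geo \
        config sync-jobs 3
)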
22:20 JoeJulian +1
22:20 StormTide good news is its replicating now and doesnt seem to be breaking the live-connected users
22:25 StormTide so yah, tl;dr... the replication scripts are broken in 3.10/ubuntu but can be fixed with a bunch of symlinking, and the docs have some pretty serious flaws in two areas: master nodes all have to have ssh to each other, not just to the one slave, to pass the generate step; and the slave needs its own gluster volume set up beyond the commands in mountbroker setup...
22:25 StormTide took a while to work through it all, but it seems to be working well
22:27 Gambit15 joined #gluster
22:28 major erm
22:28 major think I just found a memory leak
22:30 major yup
22:37 plarsen joined #gluster
22:41 major does the glusterd_create_missed_snap() code actually get called??
22:42 Vapez joined #gluster
22:43 JoeJulian Well there's a code path to it in glusterd_take_missing_brick_snapshots
22:44 major yah .. but a lot of the code seems .. inside out ..
22:44 major and it has a memory leak .. trying to review my patches to make certain I didn't introduce it
22:45 major yah .. not mine
22:47 vinurs joined #gluster
22:52 plarsen joined #gluster
23:09 bwerthmann joined #gluster
23:09 TvL2386 joined #gluster
23:18 balacafalata joined #gluster
23:20 farhorizon joined #gluster
23:34 major bah .. finally done with making it so you can hook in other snapshot interfaces
23:34 major I could only reduce it to needing to implement 5 functions :(
23:35 major I tried to reduce it further .. but the code is just a little too .. upside-down and inside out ...
23:35 major feels like something is gonna need to be seriously refactored to make it really play well w/ others
23:52 major damn .. the lvm changes are huge compared to the btrfs changes :(
