IRC log for #gluster, 2017-03-02


All times shown according to UTC.

Time Nick Message
00:08 major hurm .. just preventing glusterd from querying all these key/values that are irrelevant is another fun nightmare
00:09 major anyone know if the brickdir, in relation to a snapshot, is the original brick's mountpath or the snapshot's mountpath?
00:30 plarsen joined #gluster
00:35 kramdoss_ joined #gluster
00:48 cloph_away joined #gluster
00:57 vinurs joined #gluster
00:58 vinurs joined #gluster
00:59 vinurs joined #gluster
01:00 vinurs joined #gluster
01:00 vbellur joined #gluster
01:01 vinurs joined #gluster
01:10 shdeng joined #gluster
01:16 saintpablo joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:50 jwd joined #gluster
02:52 arpu joined #gluster
02:56 Jacob843 joined #gluster
03:02 Gambit15 joined #gluster
03:22 kramdoss_ joined #gluster
03:25 BlackoutWNCT Hey guys, quick question about Gluster 3.10. Is it backwards compatible with the 3.8 client?
03:25 BlackoutWNCT As in, can a 3.10 client mount a 3.8 mount point?
03:47 atinm joined #gluster
03:51 nbalacha joined #gluster
03:59 dominicpg joined #gluster
04:05 poornima joined #gluster
04:05 itisravi joined #gluster
04:20 karthik_us joined #gluster
04:23 ksandha_ joined #gluster
04:30 susant joined #gluster
04:32 ppai joined #gluster
04:38 sanoj joined #gluster
04:41 RameshN joined #gluster
04:43 karthik_us joined #gluster
04:44 sanoj joined #gluster
04:45 rafi joined #gluster
04:46 buvanesh_kumar joined #gluster
04:50 Prasad joined #gluster
04:51 jwd joined #gluster
05:04 skoduri joined #gluster
05:06 rjoseph joined #gluster
05:06 skumar joined #gluster
05:07 BitByteNybble110 joined #gluster
05:08 Shu6h3ndu joined #gluster
05:12 gyadav joined #gluster
05:13 cholcombe joined #gluster
05:17 sbulage joined #gluster
05:21 ankitr joined #gluster
05:22 apandey joined #gluster
05:24 kotreshhr joined #gluster
05:25 aravindavk joined #gluster
05:29 ndarshan joined #gluster
05:31 k4n0 joined #gluster
05:33 prasanth joined #gluster
05:44 kdhananjay joined #gluster
05:48 riyas joined #gluster
05:50 jiffin joined #gluster
05:57 Saravanakmr joined #gluster
06:08 jwd joined #gluster
06:10 tchu joined #gluster
06:19 ksandha_ joined #gluster
06:22 hgowtham joined #gluster
06:26 skoduri joined #gluster
06:30 gyadav_ joined #gluster
06:42 magrawal joined #gluster
06:52 k4n0 joined #gluster
06:55 poornima_ joined #gluster
06:56 itisravi joined #gluster
07:01 mhulsman joined #gluster
07:06 jkroon joined #gluster
07:10 jwd joined #gluster
07:12 vinurs joined #gluster
07:13 mbukatov joined #gluster
07:13 hgowtham joined #gluster
07:15 martinetd joined #gluster
07:17 TBlaar joined #gluster
07:18 rjoseph joined #gluster
07:19 karthik_us joined #gluster
07:28 jtux joined #gluster
07:30 rastar joined #gluster
07:37 d0nn1e joined #gluster
07:41 mhulsman joined #gluster
07:41 major joined #gluster
07:42 derjohn_mob joined #gluster
07:51 ivan_rossi joined #gluster
07:54 skoduri joined #gluster
07:54 ashiq joined #gluster
07:55 Ulrar joined #gluster
08:01 [diablo] joined #gluster
08:08 msvbhat joined #gluster
08:09 gyadav_ joined #gluster
08:12 k4n0 joined #gluster
08:15 BitByteNybble110 joined #gluster
08:18 poornima_ joined #gluster
08:20 RameshN_ joined #gluster
08:28 fsimonce joined #gluster
08:33 tchu joined #gluster
08:37 fsimonce joined #gluster
08:37 sona joined #gluster
08:49 Saravanakmr joined #gluster
08:51 vinurs joined #gluster
09:04 anbehl joined #gluster
09:05 gyadav_ joined #gluster
09:07 derjohn_mob joined #gluster
09:10 ahino joined #gluster
09:12 flying joined #gluster
09:19 RameshN_ joined #gluster
09:25 prasanth joined #gluster
09:26 hgowtham joined #gluster
09:43 rafi1 joined #gluster
09:48 susant joined #gluster
09:48 hgowtham joined #gluster
09:58 ShwethaHP joined #gluster
10:01 vinurs joined #gluster
10:05 saybeano joined #gluster
10:19 kotreshhr joined #gluster
10:22 rafi1 joined #gluster
10:36 opthomasprime joined #gluster
10:49 spunge joined #gluster
10:52 nh2 joined #gluster
10:52 spunge Hey there, i've got a question. I see some talk about supporting zfs snaps around sept. 2016, but nothing after that. What's the current state of zfs snapshot support as an alternative to lvm snapshots? Are there any plans to support zfs snapshots in the future?
10:58 arpu joined #gluster
10:59 poornima_ joined #gluster
11:02 vinurs joined #gluster
11:04 Seth_Karlo joined #gluster
11:05 Seth_Karlo joined #gluster
11:40 vinurs joined #gluster
11:42 skumar_ joined #gluster
11:47 bfoster joined #gluster
11:50 amar` joined #gluster
11:52 jiffin spunge: rjoseph can give a proper update on this. I can see the following patches posted upstream for review https://review.gluster.org/#/q/owner:sriramster+status:open
11:52 glusterbot Title: Gerrit Code Review (at review.gluster.org)
11:54 bfoster joined #gluster
11:55 derjohn_mob joined #gluster
11:58 msvbhat joined #gluster
11:59 amar` joined #gluster
12:15 msvbhat joined #gluster
12:20 cloph major: don't use stripe volumes
12:20 cloph @stripe major
12:22 skumar joined #gluster
12:24 skoduri joined #gluster
12:46 Philambdo joined #gluster
12:50 Ashutto joined #gluster
12:56 Karan joined #gluster
12:58 kramdoss_ joined #gluster
13:02 ahino1 joined #gluster
13:03 jiffin1 joined #gluster
13:04 TvL2386 joined #gluster
13:07 msvbhat joined #gluster
13:12 malevolent joined #gluster
13:14 kotreshhr left #gluster
13:15 Ionoxx joined #gluster
13:19 susant left #gluster
13:20 rjoseph spunge: One of our upstream users, sriram, is working on the ZFS support. He has sent some initial patches to add it
13:21 rjoseph But before making those changes we have to change the current snapshot code so that ZFS and, later, btrfs support can be added seamlessly
13:24 spunge rjoseph: I see, thanks for the information, so it could take a while. I will stick with LVM for the moment then. Keep up the rockin' work!
13:27 kpease joined #gluster
13:30 buvanesh_kumar joined #gluster
13:31 shyam joined #gluster
13:31 rjoseph spunge: Yes, it might take some time. Therefore I think you are stuck with LVM for some more time :-)
13:32 apandey joined #gluster
13:33 unclemarc joined #gluster
13:33 ankitr joined #gluster
13:38 level7 joined #gluster
13:45 malevolent joined #gluster
14:03 nbalacha joined #gluster
14:04 prasanth joined #gluster
14:04 vinurs joined #gluster
14:05 ahino joined #gluster
14:11 jiffin joined #gluster
14:12 squizzi joined #gluster
14:14 ira joined #gluster
14:15 shyam joined #gluster
14:17 annettec joined #gluster
14:20 atinm joined #gluster
14:24 saintpablos joined #gluster
14:31 skylar joined #gluster
14:52 Seth_Kar_ joined #gluster
14:54 Seth_Karlo joined #gluster
15:01 level7_ joined #gluster
15:01 plarsen joined #gluster
15:10 Seth_Karlo joined #gluster
15:10 Vytas_ joined #gluster
15:11 Seth_Karlo joined #gluster
15:13 Seth_Karlo joined #gluster
15:15 Seth_Karlo joined #gluster
15:17 Seth_Kar_ joined #gluster
15:18 Seth_Kar_ joined #gluster
15:20 Seth_Ka__ joined #gluster
15:23 Seth_Karlo joined #gluster
15:25 shyam joined #gluster
15:30 Wizek_ joined #gluster
15:36 farhorizon joined #gluster
15:37 riyas joined #gluster
15:42 Shu6h3ndu joined #gluster
15:52 ankitr joined #gluster
16:03 wushudoin joined #gluster
16:07 farhorizon joined #gluster
16:30 BatS9 BlackoutWNCT: Tested and can confirm that a 3.10 client can mount a 3.7 volume
16:30 BatS9 Not sure if it's a good idea, just confirming that it can be done
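A quick way to sanity-check versions before mixing clients and servers (a sketch only; the op-version query assumes GlusterFS 3.10 or later):
    # on any server: the operating version the cluster is currently running at
    gluster volume get all cluster.op-version
    # on each client: the installed client version
    glusterfs --version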
16:33 cholcombe joined #gluster
16:34 kpease joined #gluster
16:40 guhcampos joined #gluster
16:53 vbellur joined #gluster
16:57 cholcombe joined #gluster
17:00 opthomasprime joined #gluster
17:03 farhorizon joined #gluster
17:07 vbellur joined #gluster
17:08 vbellur joined #gluster
17:09 vbellur joined #gluster
17:09 vbellur joined #gluster
17:11 vbellur2 joined #gluster
17:11 vbellur joined #gluster
17:12 major cloph, why?
17:12 cloph @stripe
17:12 glusterbot cloph: (#1) Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes., or (#2) The stripe translator is deprecated. Consider enabling sharding instead.
17:12 vbellur joined #gluster
17:12 major ahh, that did pop up when you mentioned it last time
17:14 major erm, did not pop up
17:16 major cloph, thanks for the info
17:26 ankitr joined #gluster
17:27 riyas joined #gluster
17:31 vbellur joined #gluster
17:34 mhulsman joined #gluster
17:45 mhulsman joined #gluster
17:46 kraynor5b_ joined #gluster
17:48 StormTide joined #gluster
17:49 StormTide Any idea what would cause a request timeout at volume start? (peer status looks good....)
17:50 farhorizon joined #gluster
17:51 farhorizon joined #gluster
17:56 JoeJulian StormTide: depends which part is timing out.
17:59 StormTide JoeJulian: start command
18:00 StormTide think i might have figured it out though, possibly firewall/it not using the loopback address on the local host
18:00 StormTide yah thats it... i'd added the iptables rules for the remote nodes but not the node itself on the non-loopback address. it seems to be talking to itself over the lan ip ;)
18:00 JoeJulian Yep
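A sketch of the firewall rules implied here, opened for every peer address including the node's own LAN IP (the 192.168.0.0/24 source and the exact brick port range are assumptions; glusterd listens on TCP 24007 and bricks are allocated ports from 49152 upward on recent releases):
    # management daemon (glusterd)
    iptables -A INPUT -p tcp -s 192.168.0.0/24 --dport 24007 -j ACCEPT
    # brick processes, one port per brick starting at 49152
    iptables -A INPUT -p tcp -s 192.168.0.0/24 --dport 49152:49251 -j ACCEPT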
18:03 ivan_rossi left #gluster
18:04 StormTide JoeJulian: is this a sane configuration for the new small file performance feature? http://pastebin.com/cF8dPx2W ... 2 servers 4 bricks...
18:04 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
18:05 JoeJulian I would at least add an arbiter.
18:05 StormTide is it necessary with a favorite-child-policy in place?
18:14 JoeJulian StormTide: There may be some settings needed to disable quorum.
18:15 sona joined #gluster
18:16 StormTide JoeJulian: interesting. anything i can rtfm on this? For this dataset it'd be ok if they both worked independently and then just mtime compared to fix split brains ... its primarily a read-only workload and a file exists or not kinda thing... webserver imagery behind a cdn..
18:16 JoeJulian +1
18:17 derjohn_mob joined #gluster
18:18 JoeJulian I have to look again. I think server quorum should now be disabled by default so you should just need to disable volume quorum. Look at "gluster volume set help" look for cluster.quorum*
18:20 StormTide JoeJulian: looks like quorum-type, default value = none, and quorum-count = null...
18:20 StormTide so that should allow write without quorum by default i think?
18:21 JoeJulian Not sure on the "null" part. I think you want 1. iirc when I looked at the code for that, 0 would use the default 50%.
18:22 JoeJulian or 50%+1, it's been a while.
18:22 StormTide ok so set the quorum count to 1, gotcha
18:22 JoeJulian test that to make sure i'm not lying.
18:22 shyam joined #gluster
18:23 StormTide no worries, it'll work set to 1 im sure... does that mean that type needs to be set to fixed from none?
18:23 JoeJulian Yes
18:24 StormTide perfect, thanks
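A sketch of the settings just discussed, for a hypothetical 2-brick replica volume named gv0; this lets writes proceed with a single replica up and resolves later split-brains by mtime, as StormTide described:
    gluster volume set gv0 cluster.quorum-type fixed
    gluster volume set gv0 cluster.quorum-count 1
    gluster volume set gv0 cluster.favorite-child-policy mtime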
18:24 StormTide any other gotcha's i should be looking out for here/
18:25 JoeJulian Just the standard "don't read from disk when you can read from cache" web advice.
18:25 StormTide yah this is just pre-thumbnailing data.
18:25 StormTide on first access they generate thumbs which are served locally
18:26 StormTide same file is never hit twice by an app server kinda thing
18:27 major JoeJulian, on one of your posts a while ago there is a user message asking about support for a per-volume policy such that they always wanted 2 copies, but wanted to be able to specify 3+ servers such that should 1 server go offline, it would automatically start creating new copies on one of the spares
18:28 JoeJulian I suppose I didn't answer it.
18:28 major you replied a while later saying that something had finally been added that was similar to their request
18:28 JoeJulian Probably because it sounds unrelated to any article I've ever written. ;)
18:29 major it was in your replication Do's and Don'ts post, 4.5 years ago
18:29 JoeJulian Ah yes, just like it was yesterday... ;)
18:29 major ahh .. your reply 4 years ago was "no"
18:29 JoeJulian No, there's no such thing as automatic spares.
18:29 major but someone else replied 10 months ago saying that it now supports that option...
18:30 JoeJulian I should cull that.
18:30 major https://joejulian.name/blog/glusterfs-replication-dos-and-donts/
18:30 glusterbot Title: GlusterFS replication do's and don'ts (at joejulian.name)
18:30 major Chandan Kumar's question
18:30 JoeJulian Got it.
18:30 major it sounds like a hot-spare policy
18:31 major which wouldn't make sense w/out like .. 2 servers and an arbiter
18:31 major so .. 3 servers ..
18:31 major anyway .. I hadn't found anything supporting hotspares .. so the thread caught me off guard
18:32 mhulsman joined #gluster
18:33 JoeJulian I've started having some thoughts about how that should work based on my recent growing involvement with kubernetes. I'm sure it'll turn into an RFE soon.
18:34 major I am still playing code catch-up
18:35 major and trying to sift through existing attempts to do this stuff .. AND trying to understand this newly discovered discussion about a socket plugin interface into glusterd
18:35 JoeJulian Have you found the stuff on glusterd 2.0?
18:36 major I dunno .. doing google site:lists.gluster.org searches trying to bitrat relevant information
18:37 major some of what I run across is pretty old .. sometimes there are URLs with repos with proposed changes that I fetch with a tracking branch and move on
18:37 JoeJulian Feel free to email maintainers directly if you need some specific knowledge about the code base. They're normally very receptive and helpful.
18:38 major yah .. I likely will once I have exhausted my sleuthing options :)
18:39 major and .. found another repo
18:40 major sigh .. these are not good repos
18:40 major topic branch fail
18:51 StormTide JoeJulian: what are the recommended mount options these days, noatime obviously.. but direct-io (defaults to off right?) etc?
18:52 StormTide do the bricks need atime enabled on their mounts?
18:52 JoeJulian noatime would be a brick mount option. The fuse mount doesn't matter.
18:53 JoeJulian When mounting on a server, I only have "x-systemd.require=glusterd.service".
18:54 JoeJulian If you have a workload that does a ton of lookups, #2 from ,,(php) might be good.
18:54 glusterbot (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-
18:54 JoeJulian @php
18:54 glusterbot JoeJulian: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-
18:54 JoeJulian hmm, why is it cutting that off now...
18:54 JoeJulian @more
18:54 glusterbot JoeJulian: Error: You haven't asked me a command; perhaps you want to see someone else's more.  To do so, call this command with that person's nick.
18:55 major its bitter
18:55 JoeJulian @factoids search php
18:55 glusterbot JoeJulian: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-
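The factoid above translates into fuse mount options along these lines (a sketch; the server, volume name and 600-second values are illustrative, and the factoid itself is truncated by the bot):
    mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600 server1:/gv0 /mnt/gv0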
18:57 StormTide stat calls should be handled by the md-cache though right?
18:57 StormTide like on the dirs n such
18:58 StormTide like i think i have a basic understanding of what you guys have done with the small file cache/invalidation stuff and it seems reasonable... in this workload the only thing that should end up going to the server is the actual file access, not all the stat()s on the dirpath that php would normally do...
18:58 JoeJulian Only if you're mounting via nfs.
18:58 JoeJulian fuse doesn't integrate with mdcache.
18:58 Vapez joined #gluster
18:58 Vapez joined #gluster
18:58 farhorizon joined #gluster
18:59 StormTide oh really?
19:01 farhorizon joined #gluster
19:01 StormTide im gonna have to look more closely at this 3.9+ stuff then... why wouldnt this be in the client?
19:02 JoeJulian How do you invalidate 10000 client caches when one of them changes a file? It's a little bit of a complex problem set.
19:04 major multicast subscription with a checksum of the most recent valid cache and update the checksum if 1 client changes it
19:05 JoeJulian I've argued for multicast since 3.0. No multicast support exists.
19:05 StormTide JoeJulian: a valid question... thought one you guys had solved lol ;) in this case im looking at a small number of clients and willing to give up write performance for quick reads... hrm
19:06 StormTide nfs can work i guess, but i was hoping for the failover setup without having to setup a watchdog
19:06 JoeJulian fyi, about the only code I've had committed are for spelling and grammar. I'm just a user.
19:07 JoeJulian 3.8+ have added features that should handle caching in the client.
19:07 StormTide im trying the 3.10 client with this...
19:07 JoeJulian Essentially, iirc, the servers keep track of what the client should have cached and sends invalidation messages as needed.
19:07 JoeJulian I /think/ that has to be turned on though.
19:08 StormTide so i saw this post: http://blog.gluster.org/2016/10/gluster-tiering-and-small-file-performance/
19:08 glusterbot Title: Gluster tiering and small file performance | Gluster Community Website (at blog.gluster.org)
19:09 major yah .. the upcalling
19:09 major it is disabled by default
19:09 StormTide looked like a reasonable approach to the problem...... does the 3.9+ client handle this caching so stat() doesnt have to hit the servers every time?
19:10 major can always toggle it and run tests before/after toggling and see what it does :)
19:11 StormTide but for clarity, is this feature in the fuse client?
19:11 JoeJulian yes
19:14 StormTide k good good, thought i missed something important there ;)
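The upcall-based client caching being discussed is switched on per volume; a sketch of the usual knobs from the 3.9+ md-cache work (volume name hypothetical, timeout values illustrative):
    gluster volume set gv0 features.cache-invalidation on
    gluster volume set gv0 features.cache-invalidation-timeout 600
    gluster volume set gv0 performance.stat-prefetch on
    gluster volume set gv0 performance.cache-invalidation on
    gluster volume set gv0 performance.md-cache-timeout 600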
19:15 major Get to spin up my new servers tonight .. can't wait ..
19:17 StormTide does a client need any inbound ports open to make upcall work?
19:17 ahino joined #gluster
19:30 farhorizon joined #gluster
19:30 mhulsman joined #gluster
19:31 major is there any extra work necessary for safely shutting down gluster during a power failure (and presumably a notification from the UPS)?
19:32 major in particular .. how do you kick your clients off when the servers need to come down "now"
19:38 major you know .. for many systems that isn't a problem .. since most of the logic is on the server side .. but in gluster ...
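One rough sketch of a UPS-triggered shutdown on a server, assuming clients are unmounted or left to time out first; the helper script path is what gluster server packages typically ship, so treat it as an assumption:
    # stop brick, self-heal and gluster NFS processes, then the management daemon
    /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
    systemctl stop glusterd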
19:46 major JoeJulian, okay .. I have found the most recent zfs/lvm snapshot cleanup code
19:46 major sadly .. it isn't exactly suitable for btrfs .. and it is one giant squashed history ..
19:47 JoeJulian <grumble>
19:47 major I can at least split it out into a couple of different topic branches that depend on one another so that there is less church, and merge them all into an integration branch
19:47 major s/church/churn/
19:47 glusterbot What major meant to say was: I can at least split it out into a couple of different topic branches that depend on one another so that there is less churn, and merge them all into an integration branch
19:48 farhorizon joined #gluster
19:48 major but for the most part the zfs cleanup follows the code-flow of the lvm side .. which still expects snapshots to exist on their own device and the like
19:49 ahino joined #gluster
19:49 major this one commit should have been at least 2-3 commits IMHO ...
19:51 JoeJulian That's probably due to how features are added through the gerrit review process.
19:52 major I dunno .. if this was reviewed by me I would have rejected it
19:52 major too much crap in 1 patch ..
19:52 JoeJulian You could probably go back to gerrit and see how it progressed.
19:52 major changes the build system AND adds features at the same time
19:52 JoeJulian :(
19:52 major yah .. I expect it would have stalled really
19:52 major and .. it wasn't touched since August .. soo .. bleh
19:52 major anyway .. regardless .. it can easily be split out and cleaned up
19:53 major besides .. as cute as it all is .. and it is distinctly IN the right direction .. it still doesn't help the btrfs case
19:54 major on the flip side .. I should have a published branch with this crap cleaned up and integrated with 3.10 before the end of the day Friday ..
19:54 JoeJulian You rock.
19:55 major no .. I just have a neurosis for not being able to sleep when I start thinking about code problems :(
19:55 major sometimes I go days w/out sleep because of it
19:55 major soo .. it is in the best interest for my health to fix it asap :)
19:55 JoeJulian I used to be that way. I guess I just got old.
19:56 major or stopped making coffee after 2pm ..
19:56 tdasilva joined #gluster
19:56 JoeJulian It was Dr Pepper, but that goes along with getting old. Can't just consume calories non-stop like I used to.
19:56 major yah .. I hear that
20:00 farhorizon joined #gluster
20:06 major hurm ..
20:06 major they never added the Makefiles ..
20:06 major la sigh
20:08 Seth_Karlo joined #gluster
20:09 msvbhat joined #gluster
20:14 major just trying to get this new lvm restructuring to even compile
20:15 PTech joined #gluster
20:18 StormTide http://pastebin.com/NiNYSJkY <--- any idea what that means... getting some sort of auth failure when i try to mount
20:18 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:18 glusterbot StormTide: <-'s karma is now -4
20:18 StormTide glusterbot is cruel... those pastebins dont have expiry lol
20:18 JoeJulian Poor <-. Nobody likes it.
20:18 JoeJulian Sure they do
20:20 StormTide oic they happen after you paste
20:21 StormTide https://paste.fedoraproject.org/paste/zc4cMJki~oIpZTAkmba-7F5M1UNdIGYhyRLivL9gydE= <-- there...
20:21 glusterbot StormTide: <'s karma is now -27
20:21 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
20:24 StormTide so i have auth.allow enabled, but all the right ip's are there... firewalls open both ways between machines...
20:31 mhulsman joined #gluster
20:36 major la sigh .. this code is so old that updating it to the changes between August and now is a chore ...
20:40 StormTide that error's definitely related to auth.allow, if i remove it, the volume mounts fine...
20:41 StormTide auth.allow is a CSV of IPs without a mask, right? like set auth.allow 1.2.3.4,1.2.3.5
20:53 JoeJulian StormTide: I think so, yes.
20:54 JoeJulian StormTide: I've always just used iptables.
20:56 StormTide JoeJulian: pretty sure auth.allow is broken .. i can use iptables though and just treat it as open...
20:56 major grrr...
20:56 JoeJulian imho, that's safer than auth.allow. ssl auth if it's not a private network.
20:56 major they made changes to the code during the move of the functions ..
20:56 major my mind melts
20:59 StormTide yah, either im doing something really wrong or auth.allow is broken...
21:00 StormTide anyway resetting the param and reconnecting works, so i'll just use iptables and that should be fine
21:00 StormTide thx
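For reference, auth.allow takes a comma-separated list of addresses (wildcards such as 192.168.0.* are accepted); a sketch with hypothetical values, plus the reset StormTide ended up doing:
    gluster volume set gv0 auth.allow 192.168.0.11,192.168.0.12
    gluster volume reset gv0 auth.allow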
21:03 major thank goddess I can force git to figure this out for me
21:04 StormTide JoeJulian: what do you do re x-systemd.require=glusterd.service when it's not a glusterd server?
21:04 StormTide (ie theres no systemctl job to reference)
21:05 StormTide see some reference to systemd automount...
21:05 JoeJulian StormTide: I don't. I assume the servers are up and running.
21:06 JoeJulian You could network-online.target
21:06 JoeJulian But I think that happens anyway.
21:07 StormTide gotcha, ok thanks.
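A sketch of an /etc/fstab entry for a pure client (hostname, volume and mountpoint are hypothetical); note that current systemd spells the option x-systemd.requires=, and on a box that is not also a server the dependency would be network-online.target rather than glusterd.service:
    server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,x-systemd.requires=network-online.target  0 0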
21:13 msvbhat joined #gluster
21:16 jeffspeff joined #gluster
21:24 vbellur joined #gluster
21:25 vbellur joined #gluster
21:25 vbellur joined #gluster
21:26 vbellur joined #gluster
21:27 vbellur joined #gluster
21:27 vbellur joined #gluster
21:28 vbellur joined #gluster
21:28 vbellur joined #gluster
21:29 rastar joined #gluster
21:30 rastar joined #gluster
21:33 StormTide cool, up and running and the small file feature seems to be working as expected... performance is quite fast with an rsync compare
21:33 StormTide thanks for your help today JoeJulian
21:33 JoeJulian You're welcome.
21:54 StormTide this is actually working way faster than i expected. 5.2mb/s write of the small files, then rsync scan across 6gb of smallfile data takes only 1.684 seconds...
21:54 JoeJulian +1
21:54 JoeJulian You should write up a blog post. :)
21:55 JoeJulian It's far too infrequently that people blog about things going right.
21:55 StormTide i may do just that after i get it tested out a bit more... still need to test failure/healing and live performance... but the rsync perf is impressive
21:56 Utoxin joined #gluster
21:58 major damn .. fighting the build system now
22:00 rideh joined #gluster
22:02 major and .. compiling :)
22:02 major okay .. soo .. might have zfs support compiling and testable before I get on the train to head south
22:03 major soo .. is there going to be any sort of support for mixed filesystem snapshots?
22:04 major you know .. brick1 has lvm, brick2 has zfs, brick3 is btrfs ..
22:04 JoeJulian What do you mean?
22:04 JoeJulian Maybe...
22:04 JoeJulian Tiering could make that desirable.
22:04 major okie dokie
22:05 farhorizon joined #gluster
22:18 major hah .. its a rough time when you are excited that you finally got 'make clean' to work .. :(
22:18 JoeJulian heh
22:21 major problem is I am working on an older tree in order to just get 1/3rd of this patch to apply and prove it is building w/out modifying any actual code
22:22 major want to get it compiling before I merge it with 3.10 .. last time I tried that I put myself in the corner and had to wear a funny had for 30 minutes
22:22 major hat*
22:23 JoeJulian Tuesday? If so that's ok. It was funny hat day.
22:23 major heh
22:37 major having problems locating the correct headers for some of these types >.<
22:40 major or .. pulling my head out of durp ville long enough to think about the problem clearly ..
22:51 baber joined #gluster
23:03 squizzi joined #gluster
23:12 major bamf .. compiled
23:14 major okay .. and with that cleaned and committed .. it's time to ramp it all forward in time to 3.10 :)
23:15 JoeJulian Speaking of 3:10, isn'
23:15 JoeJulian n't that your train leaving now?
23:15 major no .. leaves in 10 minutes less than 3 hours from now .. but I have to be there .. roughly .. 30 minutes before it departs
23:15 major plenty of time ..
23:16 major and it is a 2 block walk away
23:17 major 99 software bugs in the code .. 99 software bugs .. take one down .. patch it around .. 168 software bugs in the code
23:18 major anyway .. this is all just the core of the moving of the LVM-specific code
23:18 major move the code .. and then make it all compile after the move
23:19 major next is figure out what was going on w/ the zfs stuff .. there are a stack of routines in there .. but they are not actively being called by anything
23:19 major so I dunno what was up
23:23 tdasilva joined #gluster
23:27 d0nn1e joined #gluster
23:32 vbellur joined #gluster
23:36 Jules-_ joined #gluster
23:49 farhoriz_ joined #gluster
23:55 major okay .. changes compiling against 3.10
23:56 major I have no idea if this code is even a good basis for doing this work .. but it is the newest available .. and there was a discussion about it Dec. 2016 .. sooo
23:56 major next up .. re-integrating the zfs portion
