
IRC log for #gluster, 2017-04-10


All times shown according to UTC.

Time Nick Message
01:18 shdeng joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:01 derjohn_mob joined #gluster
02:11 nh2 joined #gluster
02:12 nh2 joined #gluster
02:21 armyriad joined #gluster
02:52 kramdoss_ joined #gluster
03:08 prasanth joined #gluster
03:29 magrawal joined #gluster
03:33 shdeng joined #gluster
03:52 atinm joined #gluster
03:52 riyas joined #gluster
04:01 buvanesh_kumar joined #gluster
04:01 itisravi joined #gluster
04:22 msvbhat joined #gluster
04:23 sanoj joined #gluster
04:36 gyadav joined #gluster
04:41 dominicpg joined #gluster
04:46 sbulage joined #gluster
04:46 Wizek_ joined #gluster
04:56 jiffin joined #gluster
05:12 Prasad joined #gluster
05:12 sona joined #gluster
05:14 jwd joined #gluster
05:14 [diablo] joined #gluster
05:18 Humble joined #gluster
05:23 msvbhat joined #gluster
05:23 ankitr joined #gluster
05:24 armyriad joined #gluster
05:25 skoduri joined #gluster
05:36 Philambdo joined #gluster
05:38 apandey joined #gluster
05:41 aravindavk joined #gluster
05:44 nishanth joined #gluster
05:53 skoduri joined #gluster
05:55 apandey joined #gluster
05:55 prasanth joined #gluster
05:55 apandey joined #gluster
05:59 apandey_ joined #gluster
06:12 kotreshhr joined #gluster
06:15 Karan joined #gluster
06:16 hgowtham joined #gluster
06:17 hgowtham joined #gluster
06:20 bulde joined #gluster
06:23 jtux joined #gluster
06:25 jtux left #gluster
06:28 aardbolreiziger joined #gluster
06:33 skoduri joined #gluster
06:35 [diablo] joined #gluster
06:59 kdhananjay joined #gluster
06:59 ivan_rossi joined #gluster
06:59 ivan_rossi left #gluster
07:03 mbukatov joined #gluster
07:11 msvbhat joined #gluster
07:16 R0ok_ joined #gluster
07:21 Philambdo joined #gluster
07:30 jkroon joined #gluster
07:31 ankitr joined #gluster
07:39 fsimonce joined #gluster
07:45 rafi joined #gluster
07:46 skoduri joined #gluster
07:49 aardbolreiziger joined #gluster
07:50 ankitr joined #gluster
07:51 flying joined #gluster
07:57 jkroon joined #gluster
08:02 ankitr joined #gluster
08:19 Philambdo joined #gluster
08:19 apandey__ joined #gluster
08:19 derjohn_mob joined #gluster
08:26 rastar joined #gluster
08:36 ppai joined #gluster
08:37 aravindavk joined #gluster
08:39 jwd joined #gluster
08:40 apandey_ joined #gluster
08:40 itisravi joined #gluster
08:42 rafi_mtg1 joined #gluster
08:46 kdhananjay1 joined #gluster
08:48 ayaz joined #gluster
08:50 aardbolreiziger joined #gluster
08:51 ayaz What would be the behaviour of GlusterFS if, in a replicated setup, one of the bricks were to run out of inodes?
08:52 aardbolr_ joined #gluster
08:58 kdhananjay joined #gluster
09:00 kotreshhr joined #gluster
09:02 sona joined #gluster
09:11 [diablo] morning guys
09:12 [diablo] guys, when you've got a lot of volumes, is there a simple way to start them all, rather than starting each one by one?
09:12 [diablo] something like gluster volume start *
09:12 [diablo] likewise a stop also
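          A rough sketch of the for-loop workaround [diablo] mentions a bit later, assuming bash and the gluster CLI on PATH ("gluster volume list" prints one volume name per line):
              for vol in $(gluster volume list); do
                  gluster volume start "$vol"   # use "volume stop" to stop them all; "gluster --mode=script" skips the stop confirmation prompt
              done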
09:13 apandey__ joined #gluster
09:19 cloph joined #gluster
09:34 rafi_mtg1 joined #gluster
09:36 Humble joined #gluster
09:36 apandey_ joined #gluster
09:36 itisravi_ joined #gluster
09:39 ankitr joined #gluster
09:40 rafi1 joined #gluster
09:50 kotreshhr joined #gluster
09:51 XpineX joined #gluster
09:52 Klas [diablo]: they should start automatically
09:52 Klas or, hmm
09:53 Klas not sure which scenario you are talking about
09:55 apandey joined #gluster
09:56 [diablo] hi Klas
09:56 [diablo] Klas well, we had issues with a server: it showed the volumes as started, but they weren't ... had to stop and start them all again
09:56 [diablo] with a for loop
09:57 [diablo] then they worked
09:57 Klas ah
09:57 Klas unless I'm mistaken, stopping glusterd, killing remaining processes, then starting glusterd again
09:57 Klas should bring you up to date
09:57 Klas or just rebooting server
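          A sketch of the stop/kill/start sequence Klas describes, assuming a systemd-based distro; note that killing glusterfs processes also takes down any local fuse mounts:
              systemctl stop glusterd
              pkill glusterfsd        # brick processes
              pkill glusterfs         # self-heal daemon, gluster NFS server, local fuse clients
              systemctl start glusterd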
09:59 [diablo] hmmm now failed: Another transaction is in progress Please try again after sometime.
10:00 [diablo] when trying to start a volume
10:01 [diablo] and when I just do a gluster volume status
10:01 [diablo] it just sits there
10:01 [diablo] FYI one node is off line
10:03 [diablo] gluster volume status
10:03 [diablo] Error : Request timed out
10:03 [diablo] took about 2 mins or more to return that
10:09 [diablo] Klas tried stopping service, killing processes, starting it...
10:09 [diablo] still volumes don't start
10:13 Klas hrm
10:13 Klas is this an issue across one or several nodes?
10:13 [diablo] well, we had to move a node to a new DC so it got powered off
10:13 Klas and what does your setup look like?
10:13 [diablo] 2 x nodes
10:14 [diablo] now on 1 x node
10:14 Klas so without quorum I presume?
10:14 [diablo] yup
10:14 [diablo] the 3rd machine only acts as the RH console
10:15 Klas unfortunately, I don't know how to help =(
10:16 [diablo] np
10:16 Klas it kinda reminds me of issues I had with SSL certs when things got out of synce
10:16 [diablo] we have a ticket with RH
10:16 Klas *sync
10:16 Klas I never solved them and could reliably recreate them in several versions of 3.7, so finally decided not to use that
10:17 rafi1 joined #gluster
10:18 social joined #gluster
10:26 [diablo] Klas tbh we have some major issues with gluster
10:26 Klas that seems to be the norm when it comes to network storage =P
10:27 * cloph has one major issue with gluster, and that is with geo-replication https://bugzilla.redhat.com/show_bug.cgi?id=1431081 - the rest is thankfully fine :-)
10:27 glusterbot Bug 1431081: high, unspecified, ---, bugs, NEW , symlinks trigger faulty geo-replication state (rsnapshot usecase)
10:45 [diablo] :)
10:46 LiPi joined #gluster
10:47 LiPi Hello! I have a conceptual problem I can't get resolved. I have a 2-node gluster setup with replication enabled, and I don't get what happens when one of the two nodes dies
10:47 LiPi I thought that with replica all data was on both servers, so if one server had its hard disk break, I wouldn't lose anything
10:48 LiPi but I did the test, I killed one server and I couldn't access the data anymore
10:53 sona joined #gluster
10:53 Philambdo joined #gluster
10:56 bfoster joined #gluster
10:58 buvanesh_kumar joined #gluster
10:59 jiffin LiPi: did you enable quorum?
11:00 LiPi how can I check it?
11:01 LiPi ok
11:01 LiPi found it
11:02 LiPi I don't see it..
11:03 LiPi is it safe to set it to a 50% value?
11:05 LiPi In a 2 server cluster it does not make sense...
11:06 cloph no, should be 51% for three nodes. Otherwise it's not possible to distinguish a net-split from a real issue... lower values only make sense if you have way more peers. And yes, in a 2-server cluster you cannot use quorum
11:07 LiPi so it is not possible in a 2-node setup to maintain a copy of the files on both servers?
11:11 cloph that is not what quorum does.
11:12 cloph quorum keeps the volume in fully operational state as long as quorum is met.
11:12 cloph When quorum is no longer met, the volume is switched to read-only mode.
11:13 cloph the replication is specified by the volume type/the layout of the bricks, and you can have a replica 2 volume with bricks on only two nodes without problem.
11:13 cloph (but you should consider adding a third node and make that an arbiter)
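          A hedged example of the arbiter layout cloph suggests (hypothetical host names and brick paths; syntax as in recent 3.x releases), keeping two data copies plus one metadata-only arbiter brick:
              gluster volume create myvol replica 3 arbiter 1 \
                  node1:/bricks/b1/myvol node2:/bricks/b1/myvol node3:/bricks/arb/myvol
          An existing replica-2 volume can usually be converted the same way with "gluster volume add-brick myvol replica 3 arbiter 1 node3:/bricks/arb/myvol".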
11:13 LiPi this is what I am trying .. but I do not see my files replicated then.
11:14 cloph with three nodes, one can be taken down for maintenance without affecting the volume.
11:14 LiPi and with 2 nodes..?
11:15 LiPi with 2 nodes I would expect that if one fails the system is put in r/o or shut down
11:15 LiPi but I also expect to not lose the data
11:15 cloph with two nodes you could still add an arbiter brick on either of the nodes. If you put it on a different disk-array/separated from the data brick it can help in resilience in case you have to replace one of the data-disks
11:15 cloph but it won't help if the node itself gets down/is disconnected from the pool for some reason.
11:16 LiPi Type: Replicate
11:16 LiPi Brick1: mon1:/glusterfs/brick1/gluvol0
11:16 LiPi Brick2: mon2:/glusterfs/brick1/gluvol0
11:16 cloph with two nodes you could also have it continue running as long as the first brick is available.
11:16 LiPi only one brick contains data
11:16 LiPi coming back in 30 min.
11:17 cloph volume status for the volume would be interesting, and what the Number of Bricks line in the volume info reads - is it consistent with the replicate?
11:31 giovanne joined #gluster
11:34 giovanne_ joined #gluster
11:37 Karan joined #gluster
11:39 kotreshhr left #gluster
11:41 atinm joined #gluster
11:45 Philambdo joined #gluster
11:55 jkroon_ joined #gluster
12:01 nh2 joined #gluster
12:01 jkroon_ joined #gluster
12:13 DV joined #gluster
12:17 LiPi [MON1][root@mon1 ~]# gluster volume info
12:17 LiPi
12:17 LiPi Volume Name: gluvol0
12:17 LiPi Type: Replicate
12:17 LiPi Volume ID: dfa83966-76d3-4b3e-9b3b-7867655b15da
12:17 LiPi Status: Started
12:17 LiPi Number of Bricks: 1 x 2 = 2
12:17 LiPi Transport-type: tcp
12:17 LiPi Bricks:
12:17 LiPi Brick1: mon1:/glusterfs/brick1/gluvol0
12:17 LiPi Brick2: mon2:/glusterfs/brick1/gluvol0
12:17 LiPi [MON1][root@mon1 ~]# gluster volume status
12:17 LiPi Status of volume: gluvol0
12:17 LiPi Gluster process                              Port    Online  Pid
12:17 LiPi ------------------------------------------------------------------------------
12:17 glusterbot LiPi: ----------------------------------------------------------------------------'s karma is now -21
12:17 LiPi Brick mon1:/glusterfs/brick1/gluvol0         N/A     N       N/A
12:17 LiPi Brick mon2:/glusterfs/brick1/gluvol0         49152   Y       4797
12:17 LiPi NFS Server on localhost                      N/A     N       N/A
12:17 LiPi Self-heal Daemon on localhost                N/A     Y       16620
12:17 LiPi NFS Server on mon2                           N/A     N       N/A
12:17 LiPi Self-heal Daemon on mon2                     N/A     Y       4811
12:17 LiPi
12:17 LiPi Task Status of Volume gluvol0
12:17 LiPi ------------------------------------------------------------------------------
12:17 glusterbot LiPi: ----------------------------------------------------------------------------'s karma is now -22
12:17 LiPi There are no active volume tasks
12:17 LiPi cloph, sorry for the paste, I should have put it in a pastebin instead
12:17 LiPi but here there's the info
12:18 unclemarc joined #gluster
12:18 cloph there you have it, brick on mon1 is not up, at least not from gluster's point of view
12:24 gyadav joined #gluster
12:34 LiPi uff
12:34 LiPi yes you are right
12:34 LiPi I tried before to start it with volume start force
12:34 LiPi but it didn't do anything..
12:34 LiPi now I retried and it works
12:34 LiPi very strange
12:38 prasanth joined #gluster
12:40 [diablo] guys, we've just got our 2nd node racked and up
12:40 [diablo] what command can I run to see the state of the sync please?
12:42 cloph if you mean syncing stuff to restore replica: use the volume heal info/status.. commands
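          The commands cloph is referring to, assuming the volume is called "data" as later in the log:
              gluster volume heal data info              # entries still pending heal, listed per brick
              gluster volume heal data info split-brain  # entries gluster cannot reconcile on its own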
12:43 Asako joined #gluster
12:44 Asako Hello.  I'm seeing an error in my logs which appears to be a missing translator.  0-xlator: /usr/lib64/glusterfs/3.10.1/xlator/features/ganesha.so: cannot open.  Is there a way to fix this?
12:44 baber joined #gluster
12:47 [diablo] thanks cloph
12:52 kramdoss_ joined #gluster
12:52 gyadav joined #gluster
12:55 rwheeler joined #gluster
12:57 lanwatch left #gluster
13:09 shyam joined #gluster
13:10 Wizek_ joined #gluster
13:11 plarsen joined #gluster
13:12 prasanth joined #gluster
13:16 Wizek_ joined #gluster
13:20 Philambdo joined #gluster
13:21 snarwade joined #gluster
13:34 atinm joined #gluster
13:37 ira joined #gluster
13:37 nbalacha joined #gluster
13:41 vbellur joined #gluster
13:41 kpease joined #gluster
13:41 arpu joined #gluster
13:53 skylar joined #gluster
13:56 squizzi joined #gluster
13:59 farhorizon joined #gluster
14:04 vbellur joined #gluster
14:26 flying joined #gluster
14:26 [diablo] guys n gals... can anyone tell me if gluster controls the starting and stopping of rpc-statd, the way ctdb controls smbd
14:26 [diablo] or does it remain under systemd
14:28 flying joined #gluster
14:32 farhorizon joined #gluster
14:37 kramdoss_ joined #gluster
14:46 shyam joined #gluster
14:57 riyas joined #gluster
15:02 wushudoin joined #gluster
15:03 wushudoin joined #gluster
15:09 jiffin joined #gluster
15:12 Karan joined #gluster
15:25 vbellur joined #gluster
15:38 susant joined #gluster
15:43 jiffin joined #gluster
15:49 susant joined #gluster
15:52 gyadav joined #gluster
15:53 rastar joined #gluster
16:15 farhorizon joined #gluster
16:19 [diablo] guys got a massive CPU load going on for ages http://pasteboard.co/2FiXzXFhT.png
16:19 glusterbot Title: Pasteboard — Uploaded Image (at pasteboard.co)
16:19 [diablo] any ideas?!
16:21 timotheus1 joined #gluster
16:24 Gambit15 joined #gluster
16:43 farhorizon joined #gluster
16:54 jiffin1 joined #gluster
17:00 JoeJulian [diablo]: Check the brick logs maybe?
17:00 [diablo] hi JoeJulian
17:00 [diablo] will do, just started an 8 ball pool :D
17:00 [diablo] when I won or lost I'll check
17:04 gyadav joined #gluster
17:05 Asako heh, I'm also seeing high cpu load
17:08 [diablo] hi Asako really?
17:08 [diablo] like how high?
17:12 jiffin1 joined #gluster
17:15 Asako about 50% constant
17:15 Asako that's mostly from ganesha though
17:25 Tanner_ joined #gluster
17:26 jiffin joined #gluster
17:27 Tanner_ I have a use case I'm not sure how to do with Gluster. Basically we have our main volume used in prod, and every weekend we want to take a snapshot of that volume and create/overwrite/restore it, giving us a copy of prod data to work from
17:28 ankitr joined #gluster
17:31 Tanner_ previously when we were using NFS and AWS EBS volumes I wrote a script that took a snapshot of the volumes, then created volumes from those snapshots and used the btrfs filesystem sync command
17:35 cliluw joined #gluster
17:46 bulde joined #gluster
17:52 jiffin joined #gluster
17:53 jkroon joined #gluster
18:03 major gluster officially supports snapshots when using lvm+xfs
18:04 [diablo] so, I've got one brick using 1000% CPU
18:04 major and there is some experimental work in supporting snapshots w/ btrfs (and shortly zfs) .. but I would wait for those to be officially supported before using them in production
18:05 [diablo] I see information logging on the brick, but not errors
18:06 [diablo] oh pardon me, I see occasionally [2017-04-10 17:58:28.772155] E [MSGID: 113091] [posix.c:4125:posix_get_ancestry_non_directory] 0-data-posix: null gfid for path (null)
18:07 major [diablo], I personally find the heal status to be kind of vague .. I kept fretting over the output and trying to figure out if it was even doing anything... particularly when I kicked a 'heal all' .. it seemed like it was just growing a list of gfids as opposed to cleaning it up ..
18:07 major but .. it did clean itself up
18:07 major it just took a long time
18:07 [diablo] major well the CPU is sky high mate....
18:07 [diablo] http://pasteboard.co/2FiXzXFhT.png
18:07 glusterbot Title: Pasteboard — Uploaded Image (at pasteboard.co)
18:08 major yah .. mine was as well
18:08 [diablo] and the bulk is focused on that "data" brick
18:08 [diablo] it's unusable also
18:08 major you are running only 2-way replication? not 2+1 (2 replica + 1 arbiter) ?
18:09 [diablo] yup
18:09 [diablo] two way
18:09 major right .. so you "have" to have 2 nodes on-line to get quorum .. and they generally need to be in agreement about any and all files
18:09 [diablo] nod
18:09 [diablo] both are online
18:10 major well .. online "and" in agreement
18:10 major requesting a "heal all" basically tells the systems to compare every file and validate that both nodes agree
18:10 [diablo] hmmm major how can I check the agreement
18:10 major thats what the heal is all about
18:11 major and why the CPU load is so high
18:11 major they are comparing all the files and making certain they are in agreement
18:11 [diablo] ah
18:11 major just have to be patient
18:11 [diablo] one node had been off about 4 days
18:11 major hurm
18:11 [diablo] only got powered on again today
18:12 [diablo] I just ran gluster volume heal data info
18:12 [diablo] and saw file names flying by
18:12 major another thing I noticed was that the status output seemed to only report automatic heals (every 10 minutes) in so much as their start+finish time
18:12 major manually triggered heals never appeared in my status output
18:12 major ultra vague and non-intuitive
18:12 [diablo] I assume that's the files it's healing?
18:13 major I "think" that the gist of the heal is to sort of plow through all the files and just "flag" them as needing validation
18:13 major which lets another thread go through and start validating them
18:13 major kind of a producer/consumer model
18:13 [diablo] k
18:13 major but I haven't looked directly at the code to confirm that .. just sort of my assumption based on the behavior I was seeing
18:14 [diablo] there's a load of tiny JPG files on that volume
18:14 major pretty much all of my knowledge of the gluster code is off in the snapshot side atm
18:14 JoeJulian The high cpu load in that case is probably going through dirty files and calculating hashes for chunks. Then those hashes are compared and, if different, the data is migrated.
18:14 major but yah .. it was a lot of CPU and a lot of files .. and most of the CPU load was on the data bricks (almost no load on the arbiter for me)
18:21 [diablo] major cheers for the info...
18:21 cliluw joined #gluster
18:21 [diablo] i'll keep an eye on it for a few hours...
18:21 [diablo] guess I can't get a % finished out of it
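          There is no percentage, but watching the pending-entry count trend toward zero is the closest thing; a hedged sketch, assuming the volume is named "data":
              watch -n 60 'gluster volume heal data statistics heal-count'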
18:21 major yah ... mine took a while ..
18:21 major I also observed some .. odd behavior on the clients while a heal was going on
18:22 major in particular w/ regards to git
18:22 major had me panicking a bit
18:24 [diablo] ok
18:27 major 'git status' was claiming the repo was corrupt .. I suspect it was because the file was marked as invalid or something similar in order to facilitate the heal
18:27 major once the heal was done git was happy again
18:28 major but it concerns me in that there is an implication that glusterfs is not entirely 'usable' while a 'heal all' is in progress
18:29 major funny part is that I did a heal all on a known good filesystem that had no reported problems
18:29 JoeJulian odd
18:30 JoeJulian I've never had a usability problem during heal.
18:30 JoeJulian ... but then again, I always disable client-side heals
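          Presumably the client-side heal options JoeJulian means (the self-heal daemon still heals in the background; only heals triggered through client mounts are disabled), shown for a hypothetical volume "data":
              gluster volume set data cluster.data-self-heal off
              gluster volume set data cluster.metadata-self-heal off
              gluster volume set data cluster.entry-self-heal off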
18:34 major I dunno, ran the heal from the node
18:35 major of all things I was working on the btrfs snapshot code while the heal was going on and did a 'git status' during the heal and it reported that the git repository was corrupt
18:35 major and I had a moment of panic
18:35 major that and the thousands of gfid's being reported w/out a file name and everything else .. think it made it up to like 163k files in the status report
18:37 major was so concerned that I was about to have to manually reconstruct the filesystem that I started looking up everything and its cousin .. found lots of references to: https://gist.github.com/semiosis/4392640 in regards to recovering files from gfids
18:37 glusterbot Title: Glusterfs GFID Resolver Turns a GFID into a real path in the brick · GitHub (at gist.github.com)
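          The linked gist boils down to roughly this for regular files (which are hard-linked under .glusterfs on the brick); directories are symlinks there instead. Brick path and gfid below are hypothetical:
              BRICK=/glusterfs/brick1/myvol
              GFID=aabbccdd-1122-3344-5566-778899aabbcc
              find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" ! -path '*/.glusterfs/*'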
18:38 major anyway .. it sorted itself out in the end .. but between the insanely high CPU load and my lack of understanding as to how the heal process worked (and what I feel is a pretty vague status report from the gluster tool) .. anyway .. its over now .. and my therapist says that adding more whiskey to the coffee will help me forget ;)
18:47 Karan joined #gluster
18:55 Tanner_ major, is it possible to do a gluster snapshot/restore onto a separate cluster?
18:56 major glusters snapshot code is generally just a wrapper to the file-storage snapshots
18:56 major restore just points the volume at the snapshot
18:57 major that said .. I can imagine ways to do something like a btrfs send/recv to recreate a volume in a different cluster
18:57 major but currently I don't think that is even on the drawing board
18:58 major likely better off just using geo-replication off to another cluster
18:59 Tanner_ the problem with that is that they want the data on the temp volume to stay the same until the next restore
18:59 Tanner_ as some data is added to it throughout the week
19:00 major what are you trying to do exactly?
19:00 Tanner_ :) I was just thinking I should explain it better, maybe there is a better way
19:00 major I have a few features I was looking at adding to the snapshot code ;)
19:00 Tanner_ basically our QA team wants a copy of prod data
19:00 major for instance .. I can't temp-mount a snapshot in order to recover data previously backed up
19:00 Tanner_ so once a week (on our older system) we took a snapshot, and created volumes for them
19:01 msvbhat joined #gluster
19:01 Tanner_ they do their thing on it, then next Friday comes, and we refresh/revert that volume so they have any new data added
19:01 Tanner_ it erases their stuff but it is all done via scripted API calls so it works for us
19:02 Tanner_ now the other thing is that we want to run these volumes on mag drives, not SSDs
19:02 major so you update a master volume, then create a snapshot, they work in the snapshot, rinse/repeat
19:03 Tanner_ so basically what I should do is add enough magnetic storage to the cluster, then script snapshot/restores onto it?
19:03 major sort of a volume sandbox
19:03 Tanner_ we are also using heketi
19:04 major mag tape or like .. magneto-optical?
19:04 major they even make MO's big enough for this sort of thing anymore?
19:04 Tanner_ heh, magnetic platters
19:04 Tanner_ from a spinning drive..
19:04 major ahh
19:04 Tanner_ like.. a regular HDD, sorry
19:04 major damn .. I was all excited for a second .. glass media's shelf-life is to die for
19:05 major anyway .. back on topic
19:05 major heh
19:05 Tanner_ I'm also dealing with about 3.5-4TB of data here
19:06 major so the API is updating and snapping a sandbox volume from a master-volume
19:06 Tanner_ two disparate systems, QA uses API to update the data, snapshot and restore was done via AWS API + bash
19:07 major kinda on-par with doing LXC containers from a master volume and running an overlay FS for container-specific changes
19:07 major once a week update the master, resnap each container's copy, re-apply container-specific changes
19:07 Tanner_ yeah a CoW snapshot/volume would be perfect
19:07 major yah .. soo .. totally on my todo list for glusters snapshots ..
19:08 major just requires mounting a snapshot as a new gluster volume
19:08 major which also lets me keep my old volume instead of dropping the data about it
19:08 major I can't think of a good way to do that w/in gluster "right now"
19:10 major but yah .. being able to mount snapshots w/out "restoring" them would likely be the solution you are looking for
19:11 major or even making snapshots of snapshots and mounting those
19:11 major update master, create read-only snapshot of master (for archive purposes), create read-write snapshot of read-only snapshot, mount read-write snapshot as new volume
19:12 major that and mounting subdirectories of volumes ...
19:14 Tanner_ major, what about about this (ctrl+f: Accessing Snapshots): https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Snapshot_Commands.html
19:14 glusterbot Title: 12.2. Snapshot Commands (at access.redhat.com)
19:15 Tanner_ ah, read only
19:16 major really hates /var/run/gluster/
19:16 vbellur Tanner_: is cloning what you need - https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.7/Clone%20of%20Snapshot.md ?
19:16 glusterbot Title: glusterfs-specs/Clone of Snapshot.md at master · gluster/glusterfs-specs · GitHub (at github.com)
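          A hedged sketch of the snapshot+clone flow from that spec (hypothetical volume/snap names; requires the thin-LVM-backed snapshot support discussed below). The clone becomes an ordinary writable volume, CoW-backed and independent of the parent:
              gluster snapshot create prod-snap prodvol    # the snap name may get a timestamp suffix by default
              gluster snapshot activate prod-snap
              gluster snapshot clone qa-vol prod-snap
              gluster volume start qa-vol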
19:17 Tanner_ vbellur, exactly
19:17 major yup
19:17 major snappity ..
19:18 major double snappity
19:18 Tanner_ I want to clone a volume without doing a slow tar/rsync/cp when filesystem level tools could be used
19:18 * Tanner_ grumbles about small files
19:18 major suddenly a huge swath of code has been explained to me
19:19 major Tanner_, the snapshot code will do the snapshots at the backend storage level
19:19 major current official support requires the bricks use thin-provisioned LVM+XFS
19:19 major then you can do the snap->clone
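          For reference, a hedged sketch of a thin-provisioned LVM brick set up by hand (hypothetical device, names and sizes; heketi does the equivalent of this for you):
              pvcreate /dev/sdb
              vgcreate gluster_vg /dev/sdb
              lvcreate -L 900G --thinpool gluster_pool gluster_vg
              lvcreate -V 800G --thin -n brick1 gluster_vg/gluster_pool
              mkfs.xfs -i size=512 /dev/gluster_vg/brick1
              mkdir -p /bricks/brick1 && mount /dev/gluster_vg/brick1 /bricks/brick1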
19:20 Tanner_ I am embarrased to admit I don't actually know what FS we are running on, it was provisioned via heketi
19:20 Tanner_ so LVM+?
19:20 major and .. I need to go beat up some btrfs snapshots so the default snaps are read-only, and make clones be read-write ;)
19:20 major http://blog.gluster.org/2014/10/gluster-volume-snapshot-howto/
19:21 major https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/
19:21 glusterbot Title: Setting Up Volumes - Gluster Docs (at gluster.readthedocs.io)
19:22 major I am working on support for btrfs and zfs .. but as mentioned, the snapshot code is generally just a wrapper to the backend storage commands (lvcreate, btrfs, zfsadmin)
19:23 major okay .. so I can also scratch a todo item off my list .. it wasn't a deficiency in the code so much as a deficiency in my understanding of it
19:24 Tanner_ yeah so we are using LVM so we can do snapshots
19:24 major yah, as long as it is thin-provisioned lvm
19:24 Tanner_ it is, that is what heketi does
19:24 major good stuff
19:24 Tanner_ so now I'm kinda confused
19:28 major ?
19:37 absolutejam joined #gluster
20:03 farhoriz_ joined #gluster
20:05 Tanner_ ok, so I think what we will do is set up geo-replication to another node, and then do our cloned volumes from the slave node
20:05 Tanner_ since just cloning the volume would apparently only create the cloned volume on the master node, which would affect the prod volume when QA starts hitting it
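          A hedged sketch of the geo-replication setup being considered (hypothetical names; assumes passwordless SSH from a master node to slavehost and that slavevol already exists and is started on the slave cluster):
              gluster system:: execute gsec_create
              gluster volume geo-replication prodvol slavehost::slavevol create push-pem
              gluster volume geo-replication prodvol slavehost::slavevol start
              gluster volume geo-replication prodvol slavehost::slavevol status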
20:30 baber joined #gluster
20:38 vbellur joined #gluster
20:39 vbellur joined #gluster
20:40 vbellur joined #gluster
20:40 vbellur joined #gluster
20:41 vbellur joined #gluster
20:41 vbellur joined #gluster
20:42 vbellur joined #gluster
20:43 vbellur joined #gluster
20:56 absolutejam joined #gluster
21:00 arif-ali joined #gluster
21:01 vbellur joined #gluster
21:02 vbellur joined #gluster
21:03 vbellur joined #gluster
21:03 vbellur joined #gluster
21:05 vbellur joined #gluster
21:06 vbellur joined #gluster
21:09 vbellur joined #gluster
21:13 farhorizon joined #gluster
21:13 vbellur joined #gluster
21:14 vbellur joined #gluster
21:14 major you want the clones on alternate nodes?
21:14 * major thinks.
21:14 foster joined #gluster
21:30 major totally wondering if there is some fun way to do the btrfs/zfs send/recv integration with geo-replication
21:39 MrAbaddon joined #gluster
21:45 msvbhat joined #gluster
22:03 Tanner_ major, well, our concern is that if we do the clone on the same node that we mount our production volume from, and QA starts hammering it with reads doing automated testing, we may affect performance on prod
22:04 major sounds fair
22:04 Tanner_ also, we like the idea of doing it on a geo-replicated node because we also get a live backup sort of for free
22:06 Tanner_ and regarding btrfs send/recv for geo-replication, I would say totally do-able
22:06 Tanner_ and likely quite efficient
22:07 major yah .. gonna at least make a note to investigate it
22:07 Tanner_ cool :) do you work at Redhat?
22:07 major nope
22:07 major just randomly decided to start working on the code
22:09 Tanner_ ah, well, your help and work are appreciated. We are just switching over from NFS to gluster and are liking it so far
22:10 farhorizon joined #gluster
22:10 major yah .. about the same for me .. just .. I wanted to use btrfs and found out later snapshots didn't work .. so .. I went to fix it
22:10 major some evenings I think I need a bigger boat
22:12 Tanner_ ah, yeah I would prefer btrfs over LVM since we use btrfs heavily in other places so am more experienced with it, but we also need heketi for volume provisioning in our kubernetes cluster
22:12 Tanner_ kind of a bad comparison..
22:19 nathwill joined #gluster
22:46 baber joined #gluster
22:46 msvbhat joined #gluster
22:48 major joined #gluster
23:06 major joined #gluster
23:08 major joined #gluster
23:09 major joined #gluster
23:11 major_ joined #gluster
23:16 major_ joined #gluster
23:21 major_ joined #gluster
23:22 major_ joined #gluster
23:23 major_ joined #gluster
