
IRC log for #gluster, 2017-02-24


All times shown according to UTC.

Time Nick Message
00:10 masuberu joined #gluster
00:37 jdossey joined #gluster
00:43 ankitr joined #gluster
01:06 Philambdo joined #gluster
01:20 shdeng joined #gluster
01:45 pjrebollo joined #gluster
01:52 baber joined #gluster
01:53 masber joined #gluster
02:17 ankitr joined #gluster
02:20 shdeng joined #gluster
02:22 shdeng joined #gluster
02:22 plarsen joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:58 derjohn_mob joined #gluster
03:07 masuberu joined #gluster
03:20 ankitr joined #gluster
03:20 kramdoss_ joined #gluster
03:29 Shu6h3ndu joined #gluster
03:44 sanoj joined #gluster
03:49 mb_ joined #gluster
03:53 itisravi joined #gluster
03:54 kramdoss_ joined #gluster
03:56 magrawal joined #gluster
03:57 prasanth joined #gluster
04:02 Shu6h3ndu joined #gluster
04:06 masber joined #gluster
04:10 masber joined #gluster
04:16 gyadav joined #gluster
04:18 rejy joined #gluster
04:23 RameshN joined #gluster
04:23 kramdoss_ joined #gluster
04:29 atinm joined #gluster
04:29 skumar joined #gluster
04:36 ankitr joined #gluster
04:36 nbalacha joined #gluster
04:47 fcoelho1 joined #gluster
04:48 pjrebollo joined #gluster
04:49 kdhananjay joined #gluster
04:51 karthik_us joined #gluster
05:04 nbalacha joined #gluster
05:05 BlackoutWNCT joined #gluster
05:06 BlackoutWNCT Hey guys, quick question about Gluster 3.10. Is it backwards compatible with the 3.8 client?
05:06 BlackoutWNCT As in, can a 3.10 client mount a 3.8 mount point?
05:08 BitByteNybble110 joined #gluster
05:08 BatS9 joined #gluster
05:10 ndarshan joined #gluster
05:14 buvanesh_kumar joined #gluster
05:15 Prasad joined #gluster
05:22 mb_ joined #gluster
05:22 skumar_ joined #gluster
05:33 rafi joined #gluster
05:41 skoduri joined #gluster
05:42 apandey joined #gluster
05:43 riyas joined #gluster
05:48 skoduri_ joined #gluster
05:53 sona joined #gluster
06:00 apandey_ joined #gluster
06:00 rastar joined #gluster
06:17 Philambdo joined #gluster
06:19 sanoj joined #gluster
06:20 hgowtham joined #gluster
06:22 susant joined #gluster
06:22 Humble joined #gluster
06:27 Karan joined #gluster
06:28 ankitr joined #gluster
06:30 rafi1 joined #gluster
06:30 ankitr_ joined #gluster
06:33 nbalacha joined #gluster
06:35 sbulage joined #gluster
06:38 nthomas joined #gluster
06:41 susant joined #gluster
06:50 [diablo] joined #gluster
06:57 ahino joined #gluster
07:10 aravindavk joined #gluster
07:25 msvbhat joined #gluster
07:28 ankitr joined #gluster
07:43 d0nn1e joined #gluster
07:45 kdhananjay joined #gluster
07:48 rafi1 joined #gluster
07:56 nbalacha joined #gluster
08:03 mhulsman joined #gluster
08:04 mhulsman1 joined #gluster
08:07 skumar_ joined #gluster
08:07 arpu joined #gluster
08:09 mbukatov joined #gluster
08:09 skumar_ joined #gluster
08:13 skumar__ joined #gluster
08:23 apandey joined #gluster
08:35 hybrid512 joined #gluster
08:46 mhulsman joined #gluster
08:53 ivan_rossi joined #gluster
08:59 fsimonce joined #gluster
09:02 ashiq joined #gluster
09:13 susant joined #gluster
09:16 rastar joined #gluster
09:18 pjrebollo joined #gluster
09:22 Karan joined #gluster
09:23 Karan joined #gluster
09:26 flying joined #gluster
09:26 derjohn_mob joined #gluster
09:31 Seth_Karlo joined #gluster
09:39 derjohn_mob joined #gluster
09:55 Vaelatern joined #gluster
10:02 jkroon joined #gluster
10:20 skumar_ joined #gluster
10:20 nbalacha joined #gluster
10:23 ShwethaHP joined #gluster
10:29 panina joined #gluster
10:30 panina Morning. I've had a switch failure on my glusterfs backend network, and server quorum has been lost. Now the gluster volumes won't start. I can't seem to find proper documentation on how to resolve this.
10:32 panina When I run gluster volume status, the request times out. The systemd logs say that 'Server quorum lost for volume asdf. Stopping local bricks.' on all nodes.
10:33 msvbhat joined #gluster
10:33 cloph what is gluster peer status view of things?
10:34 panina all peers are connected.
10:35 panina But since I had it configured to a minimum 2 node quorum, that was lost, and now they can't decide who's boss. I guess.
10:36 cloph that might cause the volume to be in split-brain, but shouldn't cause volume commands to time out
10:36 cloph did you try a volume stop followed by a start, or a start force?
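For reference, the sequence suggested here would look something like this on the CLI ("asdf" is the volume name from the log above; `force` bypasses the checks an unreachable brick would otherwise trip):

```shell
gluster volume stop asdf           # prompts for confirmation
gluster volume start asdf force    # "force" restarts bricks even if some are listed as running
```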
10:41 panina Trying stopping them now
10:43 panina time out on that as well.
10:43 panina All hosts can ping each other btw.
10:46 panina Switch issue I think
10:47 panina The replacement switch seemed to be bad, when I replaced that with a third switch, I could stop some volumes
10:47 panina I will see if it resolves all issues.
10:50 skumar joined #gluster
10:50 javi404 joined #gluster
10:56 arpu joined #gluster
10:57 panina False win. Only one of 4 volumes could be stopped. The most important one still times out.
11:05 panina Likely the switch is the issue anyway. The first replacement switch was so bleeding old it didn't have jumbo frame support.
11:05 Can I'm not an expert, but can't you see any clue in the log files?
11:09 panina It's resolved now. I replaced it with a known good switch, and restarted glusterd on all hosts. Now glusterfs behaves as expected again.
11:10 panina This place and its legacy hardware... grmbl.
11:10 kramdoss_ joined #gluster
11:10 panina Lack of jumbo-frames on the switch was what caused it. Thanks for the help though.
11:10 panina Helped to rule out false leads.
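A missing-jumbo-frames switch can be ruled out quickly with a do-not-fragment ping at the large payload size (interface and peer names are placeholders; 8972 assumes a 9000-byte MTU minus 28 bytes of IP/ICMP headers):

```shell
ip link show eth0 | grep -o 'mtu [0-9]*'   # confirm the interface MTU
ping -M do -s 8972 -c 3 peer-host          # "message too long" / frag-needed errors
                                           # mean a hop in the path lacks jumbo frames
```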
11:15 panina joined #gluster
11:18 atinm joined #gluster
11:21 sbulage joined #gluster
11:22 cloph thanks for sharing the cause/solution and not just quitting with "solved" :-)
11:31 pjreboll_ joined #gluster
11:41 sbulage joined #gluster
11:43 vbellur joined #gluster
11:49 derjohn_mob joined #gluster
11:53 msvbhat joined #gluster
11:59 susant joined #gluster
12:17 [diablo] joined #gluster
12:23 atinm joined #gluster
12:24 pulli joined #gluster
12:28 panina Anyone have any tips on how to recover split-brains?
12:31 rwheeler joined #gluster
12:32 itisravi panina: https://gluster.readthedocs.io/en/latest/Troubleshooting/heal-info-and-split-brain-resolution/
12:32 glusterbot Title: Split Brain (Auto) - Gluster Docs (at gluster.readthedocs.io)
12:40 [fre] joined #gluster
12:46 [diablo] joined #gluster
13:10 panina itisravi thank you. I overlooked that part of the docs.
13:11 itisravi panina: no prob :)
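The policies from that doc boil down to a handful of CLI commands (volume name, file paths, and brick are placeholders; see the linked page for which policy fits which file):

```shell
gluster volume heal asdf info split-brain
gluster volume heal asdf split-brain latest-mtime /path/in/volume
gluster volume heal asdf split-brain bigger-file  /path/in/volume
gluster volume heal asdf split-brain source-brick host1:/bricks/b1 /path/in/volume
```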
13:13 nthomas joined #gluster
13:15 xrated left #gluster
13:15 ira joined #gluster
13:31 unclemarc joined #gluster
13:49 riyas joined #gluster
13:55 vbellur1 joined #gluster
13:57 vbellur joined #gluster
13:58 vbellur1 joined #gluster
13:58 buvanesh_kumar joined #gluster
13:59 vbellur joined #gluster
13:59 rastar joined #gluster
13:59 vbellur1 joined #gluster
14:00 ic0n joined #gluster
14:00 vbellur joined #gluster
14:01 vbellur1 joined #gluster
14:05 Seth_Karlo joined #gluster
14:12 decayofmind joined #gluster
14:14 decayofmind Hi! I'm planning an upgrade from 3.7 to 3.8. The documentation suggests it can be done without downtime. Is that true, or is there something I should be aware of?
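For the record, the online (rolling) upgrade the docs describe is roughly the following, one server at a time — a sketch, assuming a replicated volume and an RPM-based install (volume name and package commands are placeholders; distribute-only volumes still need downtime):

```shell
# On each server in turn:
systemctl stop glusterd
pkill glusterfs || true; pkill glusterfsd || true   # stop remaining brick/client processes
yum -y update glusterfs-server                      # or the distro's equivalent
systemctl start glusterd
gluster volume heal myvol info                      # wait for pending heals to reach zero
                                                    # before upgrading the next server
```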
14:14 Seth_Karlo joined #gluster
14:15 plarsen joined #gluster
14:15 shyam joined #gluster
14:21 nbalacha joined #gluster
14:26 cholcombe joined #gluster
14:30 icey joined #gluster
14:31 skylar joined #gluster
14:39 ankitr joined #gluster
14:41 kpease joined #gluster
14:44 major does anyone actively use gluster on btrfs?
14:45 ivan_rossi major: IIRC, joeJulian does
14:53 major looking at setting up a 3 node gluster on dual 10G Ethernet here at the house and sort of curious as to things like XFS+LVM or just straight btrfs
14:55 susant left #gluster
14:58 major haven't been able to locate any pros/cons information comparing the two in regards to gluster
15:06 ivan_rossi one con I know: you cannot do gluster volume snapshots without LVM ATM
15:07 ivan_rossi one pro for btrfs could be compression
15:08 ivan_rossi Joe can tell you more when/if he will be around
15:08 major thanks
15:12 rastar joined #gluster
15:12 squizzi joined #gluster
15:17 flying joined #gluster
15:19 Akram joined #gluster
15:19 Akram hi guys, is there a way to change the default owner-uid for all volumes to be created ?
15:27 cloph major: why is there no "straight xfs" as option?
15:35 farhorizon joined #gluster
15:41 baber joined #gluster
15:41 major cloph, lack of snapshots and similar.  I wasn't aware that gluster didn't handle btr snapshots, seems sorta curious
15:42 Karan joined #gluster
15:47 cholcombe Do you have to wait a small amount of time after starting a volume before it's mountable?  I have a script that starts a gluster vol and then immediately tries to mount it.  It usually works but this time it didn't
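There is a short window after `volume start` while the bricks come up and register, so scripts often retry the mount rather than assume the volume is immediately mountable — a minimal sketch (server, volume, and mountpoint in the usage comment are placeholders):

```shell
# Retry a command up to N attempts, pausing 1s between tries; returns 0 on
# the first success, 1 after the last failure.
retry() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Illustrative usage:
# retry 10 mount -t glusterfs server1:/myvol /mnt/myvol
```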
15:51 ivan_rossi major: to make consistent gluster volume snapshots you need to take simultaneous snapshots on all the peers that provide storage to a volume, and the technology to do that ATM depends on LVM. I think devs are working to support ZFS too, btrfs i don't know.
15:54 major curious .. would think it could be buried behind a simple API so that it could easily be extended .. wonder what the core concerns are there
15:55 major I suppose part of it is that btr tends to snapshot out to a path
15:55 major well .. "does" snapshot out to a path
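A btrfs snapshot is indeed itself a subvolume addressed by a path, e.g. (brick and snapshot paths are placeholders):

```shell
btrfs subvolume snapshot -r /bricks/b1 /bricks/.snaps/b1-snap1   # -r = read-only snapshot
btrfs subvolume list /bricks/b1                                  # snapshots show up as subvolumes
```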
16:00 ankitr joined #gluster
16:04 moneylotion joined #gluster
16:06 wushudoin joined #gluster
16:07 Akram joined #gluster
16:07 Akram ivan_rossi: do you have an idea about ^^^ ?
16:08 jobewan joined #gluster
16:11 ivan_rossi nope.  anyone of the devs like to shed some light on the subject?
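For what it's worth, gluster does expose per-volume ownership options; there doesn't appear to be a global default for newly created volumes, so it would have to be set after each create (volume name and uid/gid are placeholders):

```shell
gluster volume set myvol storage.owner-uid 1000
gluster volume set myvol storage.owner-gid 1000
```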
16:24 derjohn_mob joined #gluster
16:27 arpu joined #gluster
16:27 Gambit15 joined #gluster
16:41 ivan_rossi left #gluster
16:47 scc joined #gluster
16:47 jdossey joined #gluster
16:49 Kins joined #gluster
16:53 ankitr joined #gluster
16:57 major wow .. this code is ultra-entrenched in lvm
16:58 major wonder how well the xfs support is going, don't see it in the git tree
17:02 major actually .. this isn't too bad in the end .. at least not from the glusterd's view
17:02 major just heavy on the repeated testing of the underlying fstype
17:03 rastar joined #gluster
17:07 major seems a lot of heavy C work to just wrap cli commands though
17:31 farhorizon joined #gluster
17:32 JoeJulian I think the work they're doing to support zfs is expected to be generic enough to support btrfs as well, at least that's what I heard.
17:33 JoeJulian I like btrfs for compression and for ssds.
17:41 unclemarc joined #gluster
17:42 ankitr joined #gluster
17:54 buvanesh_kumar joined #gluster
17:55 ahino joined #gluster
17:57 vbellur joined #gluster
18:04 major JoeJulian, yah, I dunno .. looking at the code atm and trying to wrap my head around what all needs to be written to support stuff like glusterd_get_brick_btrfs_details()
18:05 major looks like 4 functions or so in this one file
18:05 major not a big deal really
18:07 JoeJulian Huh, I don't see a bugzilla entry for that.
18:08 [diablo] joined #gluster
18:09 major just looking at glusterd/src/ and looking at anything that calls LVM to figure out what would need to be filled in for btr
18:10 * JoeJulian goes to file a bug
18:10 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:10 major glusterd_get_brick_btrfs_details() is a bit of a trick in that it would likely need the quota enabled in order to fetch information such as volume size
18:10 major even if the quota isn't used
18:10 major I dunno of any other way to find out how much space a volume uses directly
18:10 major or at least a snapshot
18:17 major also .. some of these messages are a bit .. misleading .. like the idea that updating the UUID on btrfs is unsupported after snapping a volume .. btr does that automagically.. and even if the code wanted to change the label for the fs as a whole .. there is a command for that..
18:17 major so .. yah .. just looking still .. trying to understand more than anything
18:18 ahino joined #gluster
18:20 JoeJulian +1
18:20 JoeJulian @hack
18:20 glusterbot JoeJulian: The Development Work Flow is at http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow
18:20 JoeJulian In case you're interested.
18:21 major bot needs updated ;)
18:21 major but thanks
18:23 major heh, already have a topic branch pulled and tweaking this a bit
18:23 JoeJulian @forget hack
18:23 glusterbot JoeJulian: The operation succeeded.
18:24 JoeJulian @learn hack as The Simplified Development Workflow is at https://gluster.readthedocs.io/en/latest/Developer-guide/Simplified-Development-Workflow/
18:24 glusterbot JoeJulian: The operation succeeded.
18:25 JoeJulian The rfe is bug 1426749
18:25 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1426749 unspecified, unspecified, ---, bugs, NEW , [RFE] Support snapshots in btrfs and zfs
18:30 Seth_Kar_ joined #gluster
18:34 major heh
18:40 farhorizon joined #gluster
18:44 shyam joined #gluster
18:46 jeffspeff joined #gluster
18:57 intense joined #gluster
19:00 intense i am trying to integrate a SSO solution (FreeIPA) with a data store using GlusterFS. However, glusterfs only allows nfs mounting using nfsv3, which does not support kerberos authentication. does glusterfs have any authentication methods for controlling client mounting?
19:00 kpease joined #gluster
19:01 masuberu joined #gluster
19:02 kpease_ joined #gluster
19:08 mhulsman joined #gluster
19:09 snehring gluster supports nfs4 through ganesha
19:09 snehring cifs is also an option...
19:09 snehring intense ^^
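A minimal sketch of what that looks like on the NFS-Ganesha side: an export over the Gluster FSAL with Kerberos security flavors. Hostname, volume, and paths are placeholders, and option spellings should be checked against the ganesha documentation:

```
EXPORT {
    Export_Id   = 1;
    Path        = "/myvol";
    Pseudo      = "/myvol";
    Access_Type = RW;
    Protocols   = "4";
    SecType     = "krb5", "krb5i", "krb5p";
    FSAL {
        Name     = GLUSTER;
        Hostname = "server1";
        Volume   = "myvol";
    }
}
```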
19:13 pjrebollo joined #gluster
19:17 shyam joined #gluster
19:21 major okay .. gonna see if I can't break some stuff by filling in these 4 functions.. will need to hunt down what else is wantonly calling lvm commands and identify those as well
19:21 JoeJulian major++
19:21 glusterbot JoeJulian: major's karma is now 1
19:21 major regardless, looks like need to set a default snapshot location and enable the quota to fill in the 1st 4
19:22 major unless someone else has some trick I am unaware of
19:22 major gonna need to add docs for this too .. la sigh
19:23 intense snehring: thanks i will look into that
19:23 major but hey .. 4 brand new nodes w/ no data on them at all .. isn't like I am gonna hurt anything :)
19:23 JoeJulian That'd be more of a dev question. #gluster-dev maybe can help. The gluster-devel mailing list has the greatest number of developer eyeballs.
19:24 JoeJulian If you make this happen, major, I'd be very happy. I have several places I use btrfs that I would like snapshots.
19:24 major JoeJulian, is mostly a btrfs thing .. the current size of a volume doesn't seem to be recorded unless quota is turned on .. well .. I certainly never found a way to report it .. so when reporting the size of a snapshot .. well .. yah
19:25 major But .. I figure .. lets write up these 4 interfaces and see what else breaks right? :)
19:25 JoeJulian ioctl(3, BTRFS_IOC_SPACE_INFO, ?
19:26 major I dunno, was mostly thinking from the CLI
19:26 JoeJulian based on an strace of btrfs fi df
19:26 JoeJulian or btrfs fi df. :)
19:26 major a ton of glusterd/src/ is all C wrappers to LVM shell commands
19:26 major yah, that gives the filesystem
19:26 major not a per snapshot report
19:27 JoeJulian I swear I did that once upon a time...
19:27 major and gluster "appears" to want it per snapshot .. I dunno why
19:27 major it isn't hard to get .. just have to have quotas enabled
19:27 major there may be a library way to get it though .. might poke at that option as well
19:28 major but first .. fooood
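On the per-snapshot size question above: whole-filesystem numbers are easy, but per-subvolume (and hence per-snapshot) accounting does seem to require qgroups — a sketch against a placeholder brick path:

```shell
btrfs filesystem df /bricks/b1   # whole-filesystem usage only
btrfs quota enable /bricks/b1    # one-time; qgroup accounting has some overhead
btrfs qgroup show /bricks/b1     # referenced/exclusive bytes per subvolume, snapshots included
```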
19:29 buvanesh_kumar joined #gluster
19:38 Kins joined #gluster
19:51 oajs joined #gluster
19:53 squizzi joined #gluster
19:59 cholcombe joined #gluster
20:01 Kins joined #gluster
20:13 lkoranda joined #gluster
20:14 vbellur joined #gluster
20:18 major okay .. found a bunch of other code that needs filled in .. weeee
20:20 csaba joined #gluster
20:26 lkoranda joined #gluster
20:32 ahino joined #gluster
20:46 major la sigh .. copy/paste code...
20:46 mlg9000 joined #gluster
20:51 farhorizon joined #gluster
20:52 pasik joined #gluster
20:53 oajs joined #gluster
20:53 scc joined #gluster
20:53 Kins joined #gluster
21:00 vbellur joined #gluster
21:02 Jules-_ joined #gluster
21:04 skylar joined #gluster
21:05 farhorizon joined #gluster
21:06 farhoriz_ joined #gluster
21:06 JoeJulian Yeah, copy/paste is the bane of gluster source.
21:11 misc +1
21:17 mhulsman joined #gluster
21:25 Acinonyx joined #gluster
21:28 farhorizon joined #gluster
21:32 major I can tell
21:34 major hurm...
21:36 major wonder if I need to open a bug about gutting the copy-paste of glusterd_take_lvm_snapshot() and the code around it and stuffing it into a glusterd_take_snapshot() that handles the btr condition
21:37 major or just submit them as part of the same thing
21:37 major bleh
21:38 snehring one for a snapshot interface, another for an interface implementation?
21:41 major I was more thinking of just the fact that both instances of this are followed by copy/paste of the same code in 2 different locations... can make one wrapper to hold all the copy/paste code, have 1 instance of the work, and then do a btrfs check in there to decide between lvm and btr
21:41 JoeJulian +1
21:42 major likely easier to just do it and see if someone has any constructive criticisms/opinions/complaints/ideas/etc
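The shape of that refactor — one shared wrapper holding the common code once, with only the filesystem-specific call dispatched on the brick's fstype — can be sketched like this (function names and messages are illustrative, not the actual glusterd API):

```shell
# Shared entry point: the common setup/teardown code lives here once;
# only the fs-specific snapshot step branches on the brick's fstype.
take_brick_snapshot() {
  fstype=$1; brick=$2
  case "$fstype" in
    btrfs) echo "btrfs snapshot of $brick" ;;   # would call: btrfs subvolume snapshot ...
    *)     echo "lvm snapshot of $brick" ;;     # LVM remains the default path
  esac
}

take_brick_snapshot btrfs /bricks/b1   # -> btrfs snapshot of /bricks/b1
take_brick_snapshot xfs /bricks/b2     # -> lvm snapshot of /bricks/b2
```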
21:42 major sides .. its friday
21:42 JoeJulian Right!
21:43 JoeJulian Hey, major, what do you do for a living down there?
21:43 JoeJulian If I may ask.
21:43 major usually whatever they put into my contract :)
21:45 major but .. on days like this it is more fun to do things to take my mind off of work
21:45 JoeJulian Heh, I was more wondering who a "they" is. I've been through Toledo once or twice. Not a place I would have expected to find someone who can work on this stuff. :)
21:45 major ahh
21:46 Acinonyx joined #gluster
21:46 major yah .. I moved here from Austin, TX, and I mostly work out of Seattle
21:46 JoeJulian I'm in Seattle, too.
21:47 major uh oh .. now we will have to go to the Lodge and get burgers
21:47 JoeJulian Now that we have two gluster users, we can do a meetup. Maybe we could even get amye to get us dinner. :)
21:48 JoeJulian Unless her wall of espresso cups has fallen and crushed her.
21:49 major heh
21:50 JoeJulian https://twitter.com/amye/status/835196544307515392
21:50 major sort of tickled really .. putting together a little 4-node environment, each node has dual 10G, gonna hang it off my 1G fiber :)
21:50 major cause .. thats why I rent in Toledo
21:51 JoeJulian Aha
21:51 major and the traffic doesn't suck
21:51 JoeJulian Well, that all depends on your commute.
21:51 major Amtrak has an open bar
21:51 major or open enough
21:51 JoeJulian hehe
21:52 JoeJulian I keep telling Sound Transit they need to add that to the Sounder.
21:57 major for serious
21:59 joshin joined #gluster
22:02 k0nsl joined #gluster
22:02 k0nsl joined #gluster
22:05 Acinonyx joined #gluster
22:10 BitByteNybble110 Have an unusual problem on a newly rolled 3.9.1 cluster.  When configured with no volumes the glusterd service will start on a system boot.  When enabling gluster shared storage, the glusterd service will fail to start on boot but can be manually started and gluster shared storage mounted with a mount -a
22:11 BitByteNybble110 This is the contents of the glusterd.log file - https://paste.fedoraproject.org/paste/1lVEItQ8f~MGmxICpw1Mxl5M1UNdIGYhyRLivL9gydE=
22:11 k0nsl joined #gluster
22:11 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
22:11 k0nsl joined #gluster
22:11 Acinonyx joined #gluster
22:19 BitByteNybble110 I'm guessing that based on the log glusterd is trying to start before a service it depends on is ready.  I've compared this to our 3.8.8 cluster, but the systemd .service files are identical.
22:20 BitByteNybble110 The only difference between the two clusters are one cluster is running on CentOS 7 and the other on Fedora 25
22:27 buvanesh_kumar joined #gluster
22:27 amye ooh, a Seattle meetup?!
22:27 amye How awesome!
22:39 d0nn1e joined #gluster
22:47 fang64 joined #gluster
22:51 derjohn_mob joined #gluster
23:00 JoeJulian BitByteNybble110: Looks like they network isn't ready "Network is unreachable".
23:01 JoeJulian BitByteNybble110: Looks like the network isn't ready "Network is unreachable".
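"Network is unreachable" at boot usually means glusterd won the race against network setup; a common fix is a systemd drop-in that orders it after network-online.target — a sketch, assuming the stock glusterd.service unit name (the distro's wait-online service, e.g. NetworkManager-wait-online, must also be enabled for the target to actually gate on the network):

```ini
# /etc/systemd/system/glusterd.service.d/wait-for-network.conf
[Unit]
Wants=network-online.target
After=network-online.target
```

Followed by `systemctl daemon-reload` and a reboot to verify.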
23:06 f0rpaxe joined #gluster
23:22 Jacob843 joined #gluster
