
IRC log for #gluster, 2017-02-17


All times shown according to UTC.

Time Nick Message
00:20 msvbhat joined #gluster
00:21 vbellur joined #gluster
01:05 Gambit15 JoeJulian> You're not using dedup, are you? That was exactly the problem I was having the one time I tried zfs, so maybe I'm the wrong person to ask.
01:08 Gambit15 FWIW, dedup is usually recommended against in all but the most specific of edge cases. Stick to the LZ compression enabled by default
01:08 Gambit15 dedup requires a shit-ton of RAM for very little gain.
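For reference, checking and setting these properties on a ZFS dataset looks roughly like this (a sketch; the pool/dataset name tank/gluster is a placeholder, and lz4 as the usual default compression is an assumption that depends on the ZFS release):

    # show the current compression and dedup settings
    zfs get compression,dedup tank/gluster
    # keep lightweight compression on, leave dedup off
    zfs set compression=lz4 tank/gluster
    zfs set dedup=off tank/gluster
    # estimate what a dedup table would cost before ever enabling it
    zdb -S tank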
01:13 shdeng joined #gluster
01:35 jdossey joined #gluster
02:07 Gambit15 joined #gluster
02:18 jdossey joined #gluster
02:27 derjohn_mob joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:26 Shu6h3ndu joined #gluster
03:28 Shu6h3ndu joined #gluster
03:29 mb_ joined #gluster
03:32 deepbook5broo joined #gluster
03:32 deepbook5broo left #gluster
03:37 atinm joined #gluster
03:39 nbalacha joined #gluster
03:42 magrawal joined #gluster
03:44 phileas joined #gluster
03:47 kramdoss_ joined #gluster
03:56 daMaestro joined #gluster
03:57 gyadav joined #gluster
04:04 itisravi joined #gluster
04:04 RameshN joined #gluster
04:14 buvanesh_kumar joined #gluster
04:15 Humble joined #gluster
04:18 pioto joined #gluster
04:20 kdhananjay joined #gluster
04:22 ppai joined #gluster
04:30 siel joined #gluster
04:31 buvanesh_kumar joined #gluster
04:35 atinm joined #gluster
04:41 nbalacha joined #gluster
04:51 jiffin joined #gluster
04:55 Prasad joined #gluster
04:59 skumar joined #gluster
05:00 sanoj joined #gluster
05:00 kramdoss_ joined #gluster
05:05 sbulage joined #gluster
05:08 rafi joined #gluster
05:08 rjoseph joined #gluster
05:13 karthik_us joined #gluster
05:16 ndarshan joined #gluster
05:19 kramdoss_ joined #gluster
05:20 mb_ joined #gluster
05:21 apandey joined #gluster
05:26 sbulage joined #gluster
05:37 apandey joined #gluster
05:41 kramdoss_ joined #gluster
05:46 sbulage joined #gluster
05:57 Karan joined #gluster
06:04 Humble joined #gluster
06:06 Saravanakmr joined #gluster
06:20 kramdoss_ joined #gluster
06:40 Philambdo joined #gluster
06:43 sbulage joined #gluster
06:45 kramdoss_ joined #gluster
06:58 msvbhat joined #gluster
07:07 ShwethaHP joined #gluster
07:10 sbulage joined #gluster
07:17 susant joined #gluster
07:17 unlaudable joined #gluster
07:21 jtux joined #gluster
07:21 sbulage joined #gluster
07:22 ankitr joined #gluster
07:26 atinm joined #gluster
07:28 shruti` joined #gluster
07:30 ankitr_ joined #gluster
07:33 sbulage joined #gluster
07:37 kramdoss_ joined #gluster
07:39 ivan_rossi joined #gluster
07:41 msvbhat joined #gluster
07:53 sbulage joined #gluster
08:04 nbalacha joined #gluster
08:09 Umf joined #gluster
08:09 Umf Bonjour
08:09 Wizek_ joined #gluster
08:12 susant joined #gluster
08:15 shutupsquare I'm getting "no active sinks for performing self-heal" on file <gfid>. I have checked getfattr on the file in question on both nodes; they both show the same thing: 1 data operation pending for itself and for the other brick as seen by it. The file md5sum is identical, so I know the data is consistent. How do I recover from this
08:15 shutupsquare ?
08:26 [diablo] joined #gluster
08:29 shutupsquare It's a split-brain, right? But running `gluster volume heal data info split-brain` reveals no results
08:29 shutupsquare All the bricks are showing online Y
08:31 atinm joined #gluster
08:32 mhulsman joined #gluster
08:35 pulli joined #gluster
08:35 mbukatov joined #gluster
08:38 pulli joined #gluster
08:40 sanoj joined #gluster
08:43 shutupsquare I think I fixed it, I chose a brick and ran `setfattr -n trusted.afr.webdata-client-0 -v 0x000000000000000000000000 <file>` then ran self heal again and it appears to have worked.
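A sketch of that inspect-and-reset sequence, using the volume name webdata from the command above; the brick path and file name are placeholders, and the getfattr/setfattr steps are run against the file on the brick, not on the client mount:

    # check heal state from any server in the pool
    gluster volume heal webdata info
    gluster volume heal webdata info split-brain
    # on each brick, dump the afr changelog attributes for the file
    getfattr -d -m . -e hex /bricks/webdata/path/to/file
    # if both copies are identical but each shows pending data operations,
    # clear the pending counter on one chosen brick...
    setfattr -n trusted.afr.webdata-client-0 -v 0x000000000000000000000000 /bricks/webdata/path/to/file
    # ...and trigger a heal
    gluster volume heal webdata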
08:47 ton31337 joined #gluster
08:47 ton31337 hey, is anyone seeing anything like this? https://gist.githubusercontent.com/ton31337/8bc7efed7f907c0ff3ac0a073f1223be/raw/e6007f10bfc228cda1968047f12e35137ff3a830/gistfile1.txt
08:47 ton31337 invalid argument happens from time to time
08:50 sanoj joined #gluster
08:54 apandey joined #gluster
09:05 fsimonce joined #gluster
09:14 ton31337 [2017-02-16 16:26:05.781245] E [MSGID: 115068] [server-rpc-fops.c:1440:server_readv_cbk] 0-backup-xxx-server: 666631: READV -2 (0ff1d260-5499-405e-8f7e-2134caffa4fd), client: uk-m247-web45.xxx.eu-111930-2017/01/27-22:56:14:292246-backup-xxx-client-10-2-0, error-xlator: backup-xxx-posix [Invalid argument]
09:14 ton31337 is there some way to debug those actions?
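One way to get more context on errors like this, assuming the standard diagnostics volume options are available on this release, is to raise the brick and client log levels temporarily (the volume name backup-xxx is taken from the log line above):

    gluster volume set backup-xxx diagnostics.brick-log-level DEBUG
    gluster volume set backup-xxx diagnostics.client-log-level DEBUG
    # DEBUG is very chatty; drop back to INFO once done
    gluster volume set backup-xxx diagnostics.brick-log-level INFO
    gluster volume set backup-xxx diagnostics.client-log-level INFO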
09:15 poornima joined #gluster
09:22 ashiq joined #gluster
09:27 ton31337 left #gluster
09:33 pulli joined #gluster
09:39 pjrebollo joined #gluster
09:42 rafi2 joined #gluster
09:43 nbalacha joined #gluster
09:52 Seth_Karlo joined #gluster
09:56 Seth_Karlo joined #gluster
09:59 pulli joined #gluster
10:02 Seth_Karlo joined #gluster
10:03 derjohn_mob joined #gluster
10:07 percevalbot joined #gluster
10:19 Seth_Karlo joined #gluster
10:19 jkroon joined #gluster
10:35 atinm joined #gluster
10:35 shutupsquare joined #gluster
10:49 hybrid512 joined #gluster
10:51 rafi joined #gluster
10:54 k4n0 joined #gluster
11:13 msvbhat joined #gluster
11:16 jiffin joined #gluster
11:31 ashiq joined #gluster
11:40 pjrebollo joined #gluster
11:54 pjrebollo joined #gluster
12:07 Saravanakmr joined #gluster
12:11 nh2 joined #gluster
12:11 fcoelho joined #gluster
12:14 k4n0 joined #gluster
12:34 jwd joined #gluster
13:17 susant left #gluster
13:20 unclemarc joined #gluster
13:21 shyam joined #gluster
13:22 derjohn_mob joined #gluster
13:29 Jacob843 joined #gluster
13:32 ira joined #gluster
13:36 k4n0 joined #gluster
13:46 shyam joined #gluster
13:48 vbellur left #gluster
13:52 baber joined #gluster
14:06 rwheeler joined #gluster
14:18 alvinstarr1 joined #gluster
14:24 mrEriksson joined #gluster
14:26 jkroon joined #gluster
14:28 vbellur joined #gluster
14:35 gyadav joined #gluster
14:36 msvbhat joined #gluster
14:36 shyam joined #gluster
14:36 skylar joined #gluster
14:39 TvL2386 joined #gluster
14:55 Seth_Karlo joined #gluster
14:58 ic0n joined #gluster
15:04 gyadav joined #gluster
15:05 kpease joined #gluster
15:10 pjreboll_ joined #gluster
15:10 pulli joined #gluster
15:15 nbalacha joined #gluster
15:15 oajs joined #gluster
15:19 baber joined #gluster
15:22 Seth_Karlo joined #gluster
15:27 Gambit15 joined #gluster
15:28 pulli joined #gluster
15:30 oajs joined #gluster
15:39 victori joined #gluster
15:41 squeakyneb joined #gluster
15:46 Gambit15 joined #gluster
15:49 pjrebollo joined #gluster
15:52 ashiq joined #gluster
15:53 pulli joined #gluster
15:57 jeffspeff joined #gluster
16:03 baber joined #gluster
16:06 wushudoin joined #gluster
16:06 wushudoin joined #gluster
16:27 gyadav joined #gluster
16:28 mhulsman joined #gluster
16:33 mhulsman joined #gluster
16:36 shutupsquare joined #gluster
16:44 jdossey joined #gluster
16:45 armyriad joined #gluster
16:45 Wizek__ joined #gluster
16:50 sanoj joined #gluster
17:02 Seth_Karlo joined #gluster
17:11 JoeJulian shutupsquare: pending for itself is insane. resetting the attribute was, I believe, the best solution. kudos.
17:14 derjohn_mob joined #gluster
17:27 mhulsman joined #gluster
17:35 ivan_rossi left #gluster
17:38 baber joined #gluster
17:50 jeffspeff joined #gluster
17:52 daMaestro joined #gluster
17:57 mhulsman joined #gluster
18:01 Seth_Karlo joined #gluster
18:15 _KaszpiR_ joined #gluster
18:16 _KaszpiR_ derp
18:16 _KaszpiR_ weird question, i know glusterfs supports snapshots with lvm, but has anyone thought about zfs (or am I triggering a 'wtf dude' question)?
18:17 JoeJulian It's frequently discussed by the devs.
18:17 JoeJulian That and btrfs snapshots.
18:18 JoeJulian I'm sure it will happen, just not sure where that is in the development cycle.
18:20 _KaszpiR_ oh really, thanks for the info
18:26 snehring looking into that and possibly making it a reality is actually on my list of things to do
18:26 snehring (I have yet to even start in any way)
18:27 JoeJulian Go cyclones!
18:27 snehring lol
18:27 JoeJulian You should find some grant to apply for and do it that way. There's got to be something that could get that paid for.
18:29 snehring probably
18:31 vbellur joined #gluster
18:32 _KaszpiR_ from a quick glance around the net I see there is work to make snapshots more abstract - it looks like it's very tightly coupled with lvm now - so that it could be more easily modified per fs underneath
18:33 _KaszpiR_ actually it's more advanced than I expected ;)
18:44 mhulsman joined #gluster
18:47 shyam joined #gluster
18:52 baber joined #gluster
18:52 mhulsman joined #gluster
19:03 pjrebollo joined #gluster
19:07 nh2 ndevos: ping
19:07 glusterbot nh2: Please don't naked ping. http://blogs.gnome.org/mark​mc/2014/02/20/naked-pings/
19:08 nh2 ndevos: for https://bugzilla.redhat.com/show_bug.cgi?id=1422074, do you already have a patch that would implement the nanosecond timestamps? Because I'd like to try whether it works for me and how much faster it makes rsync
19:08 glusterbot Bug 1422074: unspecified, unspecified, ---, bugs, NEW , GlusterFS truncates nanoseconds to microseconds when setting mtime
19:11 JoeJulian nh2: https://review.gluster.org/#/q/status:open+project:glusterfs+topic:bug-1422074 is where you'll find a patch if it is in review.
19:11 glusterbot Title: Gerrit Code Review (at review.gluster.org)
19:14 cliluw joined #gluster
19:31 unlaudable joined #gluster
19:37 dfs joined #gluster
19:37 Guest57063 hi all, new to gluster and was reading through the docs. I have a basic question that i haven't really found an answer to. What's the distinction between a pool and a set of peers? Can you be in one while not being in the other?
19:40 shyam joined #gluster
19:43 JoeJulian @glossary
19:44 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
19:44 JoeJulian When you peer a set of servers, you create a "trusted pool"
19:45 JoeJulian Those peers trust each other, but don't trust any other server, so, for instance, you cannot probe an existing pool from a new server, you must probe the new server from within the trusted pool.
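Concretely (hostnames are placeholders): to add server3 to an existing pool, run the probe from a server that is already a member, then check the result:

    # run on server1 or server2, which are already peers -- not on server3
    gluster peer probe server3
    gluster peer status
    gluster pool list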
19:46 JoeJulian Guest57063: Does that answer your question?
19:47 Guest57063 that makes sense but i want to understand why there are two separate concepts
19:48 Guest57063 why do you have a pool and a peer
19:48 Guest57063 why the distinction?
19:48 Guest57063 how could i, for example, have a node in a pool that is not a peer? It seems like they are synonymous
19:50 mhulsman joined #gluster
19:51 moss left #gluster
19:54 JoeJulian peers are servers within the pool.
19:54 Guest57063 i understand that
19:54 Guest57063 let me rephrase
19:55 JoeJulian So peers and a pool are basically synonymous, yes.
19:55 Guest57063 ahh
19:55 Guest57063 right!!
19:55 Guest57063 but i see gluster pool list
19:55 Guest57063 and gluster peer status
19:55 Guest57063 why two commands for the same thing? could the statuses ever diverge?
19:55 JoeJulian peer status shows the status of that server's peers (excluding itself because it's not a peer of itself).
19:55 Guest57063 i am asking because i am building in monitoring and i want to know if i actually have to check both in order to know if nodes are disconnected
19:56 JoeJulian Rather than change that existing behavior, the pool list was added to give a full picture from a single server.
19:56 Guest57063 ahhhh
19:57 Guest57063 it's a matter of an evolving api and not wanting to risk breaking existing setups that rely on "old" ways of doing things
19:57 JoeJulian yes
19:57 Guest57063 fair enough
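For the monitoring case above, checking pool list alone should be enough, since it includes every peer plus the local node; the exact output layout (a State column reading Connected/Disconnected) is an assumption worth verifying against your version:

    # exit non-zero and print offenders if any pool member is not Connected
    gluster pool list | awk 'NR>1 && $NF != "Connected" {print; bad=1} END {exit bad}'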
19:58 Guest57063 on an unrelated question, if you don't mind, i was curious if you knew anything about the cons of turning bitrot detection on
19:58 Guest57063 i know the pros
19:58 Guest57063 i am wondering why i would not want to have it on
19:58 Guest57063 i am assuming nothing is free :)
19:59 JoeJulian load, mostly. If you have unsharded volumes with big files, recalculations can be very expensive.
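If it is turned on, the knobs that control that load are the scrubber throttle and frequency; a sketch assuming a volume named myvol and the bitrot CLI shipped since 3.7:

    gluster volume bitrot myvol enable
    # keep the scrubber gentle and infrequent on volumes with large files
    gluster volume bitrot myvol scrub-throttle lazy
    gluster volume bitrot myvol scrub-frequency monthly
    # see what it has flagged so far
    gluster volume bitrot myvol scrub status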
20:01 bchilds joined #gluster
20:02 bchilds when i FUSE mount a gluster volume onto the host and then restart the gluster daemon, the mount is lost when the daemon comes back up.. is that expected? is there a way to restore previously FUSE mounted volumes?
20:03 f0rpaxe joined #gluster
20:04 JoeJulian hmm, I thought the client would stick around...
20:04 JoeJulian Now you've got me curious.
20:04 Guest57063 @bchilds how do you have the mount configured, /etc/fstab?
20:04 bchilds no, just mount -t glusterfs .....
20:04 bchilds its not in fstab
20:05 Guest57063 https://www.jamescoyle.net/how-to/439-mount-a-glusterfs-volume
20:05 glusterbot Title: Mount a GlusterFS volume – JamesCoyle.net (at www.jamescoyle.net)
20:06 bchilds well, the truth is the use case is more complicated.. this is for glusterfs volumes in kubernetes applications so they are mounted on nodes when the application is scheduled there (and unmounted when removed)
20:07 Guest57063 the only way i found to make the native glusterfs mount work in a reasonably fault-tolerant manner is to define each gluster server in a file so that when the mount command is run it "knows" which servers it's supposed to connect to
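That "file" is typically just an fstab entry (or mount option) listing fallback volfile servers; a sketch with placeholder host and volume names:

    # /etc/fstab
    server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,backup-volfile-servers=server2:server3  0 0

    # or as a one-off mount
    mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/myvol /mnt/gluster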
20:07 Guest57063 @bchilds actually i need to get that to work with openshift in the not too distant future
20:07 Guest57063 so i'll soon be doing something similar
20:08 Guest57063 are you doing pure kubernetes or are you using openshift?
20:08 bchilds guest57063 :) https://bugzilla.redhat.com/show_bug.cgi?id=1423640
20:08 glusterbot Bug 1423640: high, unspecified, ---, hchen, ASSIGNED , Restart of atomic-openshift-node service terminates pod glusterfs mount
20:08 daMaestro joined #gluster
20:08 bchilds containerized openshift to make matters worse
20:08 Guest57063 yep i am in the exact same boat as you
20:09 Guest57063 i am getting some monitoring in place for gluster but i'll be integrating that into our openshift deployment in the next week or so
20:09 Guest57063 i have great fear about how buggy this is going to be
20:09 Guest57063 i am not worried about gluster to be clear
20:09 bchilds so even if gluster is serializing its current mount state someplace from which it could re-connect on restart, it might be lost because of the ephemeral nature of containers..
20:09 Guest57063 well here's the thing
20:09 JoeJulian You do know that it grossly inflates my ego when Red Hatters come here to ask me questions, right?
20:10 Guest57063 @joejulian that's hilarious
20:10 JoeJulian bchilds: Ah, that may be it.
20:10 Guest57063 the key abstraction layer is the docker volume plugin for gluster
20:10 Guest57063 i can't find that documented anywhere!!
20:10 JoeJulian If the host is changing, the client won't find it.
20:10 Guest57063 i've looked far and wide
20:10 bchilds openshift and kubernetes doesn't use the docker/gluster plugin
20:10 Guest57063 correct
20:10 Guest57063 they must use something else
20:11 Guest57063 but what is that thing?
20:11 bchilds it's all in the PV framework for kube.. it's parallel concepts
20:11 bchilds and relies on drivers on the host
20:11 bchilds or in a container in this case
20:11 Guest57063 yes that's what i am looking for documentation on
20:11 Guest57063 what that interface layer between gluster and openshift is, that's where you'll find the bug more likely than not
20:12 bchilds guest57063 : gluster on openshift https://docs.openshift.org/latest/install_config/persistent_storage/persistent_storage_glusterfs.html
20:12 vbellur bchilds: is it a bind mount within the container?
20:12 Guest57063 read that...still doesn't answer my question
20:12 Guest57063 what you are describing is a failure mode that should be handled by their interface layer
20:12 bchilds vbellur : yes .. the gluster daemon and the mount are in a container
20:12 Guest57063 the question is what that is.... if we knew, it could be hacked to do the right thing
20:12 Guest57063 and a PR submitted to fix the problem
20:12 JoeJulian Wait... the daemon is in a container? How?
20:13 Guest57063 the daemon probably isn't
20:13 Guest57063 unless they are insane
20:13 JoeJulian Oh, because it doesn't ever change. never mind.
20:13 JoeJulian The problem I had was adding a brick.
20:13 Guest57063 ohhh
20:13 Guest57063 you have enterprise don't you?
20:13 JoeJulian bchilds: is this related to, or have you looked at, heketi?
20:13 Guest57063 we are on openshift origin cause we are cheap
20:14 bchilds no this isn't heketi.. in full disclosure i work on the openshift/kube volume framework and am trying to get a grip on the above bug
20:14 bchilds which isn't remounting things when the daemon restarts
20:14 bchilds but its just acting as a gluster client
20:14 bchilds not hyperconverged
20:15 vbellur bchilds: typically I have seen this workflow -- fuse mount happens on the container host and a bind mount of the mount point on the host happens within the container.
20:15 JoeJulian When the daemon restarts, does it still have the same ip address?
20:16 bchilds that...... is a good question.. let me go do some more investigating.
20:16 bchilds it should be the same IP address
20:16 bchilds and its just a client
20:16 vbellur in the above scenario, the host would need to have an entry in fstab for the mount point to be persistent across node reboots (unless kube does some magic)
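A sketch of the workflow vbellur describes, with placeholder names: the fuse mount lives on the host (plus an fstab entry so it survives reboots) and the container only gets a bind mount, so restarting the container never touches the gluster client itself:

    # on the host
    mount -t glusterfs server1:/myvol /mnt/gluster
    # expose the mount point to the container as a bind mount
    docker run -v /mnt/gluster:/data:rw myimage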
20:16 JoeJulian @ports
20:16 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
20:16 JoeJulian no... where was that factoid...
20:16 JoeJulian @mount server
20:16 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
20:16 JoeJulian Ah, right... #2
20:17 JoeJulian That's what was nagging the back of my memory.
20:18 Seth_Karlo joined #gluster
20:18 kpease joined #gluster
20:30 baber joined #gluster
20:34 JoeJulian Meh, I can't test that. I can't fuse mount inside a container...
20:43 JoeJulian bchilds: with 3.9.1 I created a replicated volume on two servers (in containers). I mounted that volume on one of the hosts then shut down both containers.
20:43 JoeJulian bchilds: The client log reported, "All subvolumes are down. Going offline until atleast one of them comes back up." and it did so.
20:43 JoeJulian bchilds: I started the containers again and was able to access the volume.
20:44 JoeJulian ... through the client mount without remounting.
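Roughly the test sequence described above, should anyone want to repeat it (volume, brick and host names are placeholders; the two servers run in containers while the client mount sits on a host):

    gluster volume create testvol replica 2 server1:/bricks/testvol server2:/bricks/testvol
    gluster volume start testvol
    mount -t glusterfs server1:/testvol /mnt/testvol
    # stop both server containers, watch the client log, then start them again;
    # the existing mount should come back without a remount
    tail -f /var/log/glusterfs/mnt-testvol.log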
20:54 samppah has anyone used or tested the "routing on the host" method with glusterfs?
20:55 JoeJulian samppah: Not sure what you mean...
20:55 ira joined #gluster
20:56 samppah JoeJulian: using bgp or ospf on hosts for network redundancy, https://docs.cumulusnetworks.com/display/ROH/Routing+on+the+Host
20:56 glusterbot Title: Routing on the Host - Routing on the Host - Cumulus Networks (at docs.cumulusnetworks.com)
20:59 JoeJulian Ah, none of the cloud providers I've used have that support.
21:00 samppah wondering if it's worth trying with gluster or does it add too much latency to the networking
21:02 JoeJulian No, it should be a good thing. localized ospf should even reduce latency in an l3 network.
21:05 samppah even compared to a situation where all nodes are on the same subnet?
21:05 JoeJulian All on the same l2 network would normally be faster.
21:06 JoeJulian Unless the network provider is faking l2 with vxlan over l3.
21:07 samppah Ok, in this case I can at least give my opinions about network design :)
21:13 bchilds joejulian : thanks, i'm poking around to see what's up on our side.. i suspect it's dropping whatever keeps the mount state
21:14 JoeJulian bchilds: Check the client log for the fuse mount.
21:16 Acinonyx joined #gluster
21:26 gem joined #gluster
21:56 Vapez_ joined #gluster
22:02 cyberbootje1 hi all, anyone have any experience with glusterfs on illumos?
22:12 tallmocha joined #gluster
22:18 pjrebollo joined #gluster
22:24 bchilds joejulian : just an update on that issue earlier... the problem is in the FUSE daemon and not gluster :-/
22:24 bchilds joejulian : kill and restart FUSE and it breaks
22:29 mhulsman joined #gluster
22:31 bchilds does anyone here know FUSE devs or where to open bugs against FUSE?
22:42 shutupsquare joined #gluster
23:03 daMaestro|isBack joined #gluster
23:23 shutupsq_ joined #gluster
23:33 pjrebollo joined #gluster
23:53 jwd joined #gluster
