
IRC log for #gluster, 2016-09-02


All times are shown in UTC.

Time Nick Message
00:31 Alghost_ joined #gluster
00:44 kovshenin joined #gluster
01:04 shdeng joined #gluster
01:14 kovshenin joined #gluster
01:33 Lee1092 joined #gluster
01:41 aj__ joined #gluster
02:05 d0nn1e joined #gluster
02:30 Alghost joined #gluster
02:50 kramdoss_ joined #gluster
02:55 riyas joined #gluster
03:03 Gambit15 joined #gluster
03:15 sanoj joined #gluster
03:17 prasanth joined #gluster
03:21 magrawal joined #gluster
03:41 alvinstarr joined #gluster
03:47 sanoj joined #gluster
03:47 jith_ joined #gluster
03:57 an_ joined #gluster
03:58 itisravi joined #gluster
04:04 Gnomethrower joined #gluster
04:05 Saravanakmr joined #gluster
04:07 atinm joined #gluster
04:21 karthik_ joined #gluster
04:25 shubhendu joined #gluster
04:26 riyas joined #gluster
04:33 kotreshhr joined #gluster
04:33 itisravi joined #gluster
04:37 aspandey joined #gluster
04:37 RameshN joined #gluster
04:38 raghuhg joined #gluster
04:41 ppai joined #gluster
04:41 Alghost joined #gluster
04:42 kshlm joined #gluster
04:45 Gnomethrower joined #gluster
04:55 shubhendu joined #gluster
04:56 kdhananjay joined #gluster
05:03 aravindavk joined #gluster
05:03 hgowtham joined #gluster
05:04 jiffin joined #gluster
05:04 ndarshan joined #gluster
05:10 prasanth joined #gluster
05:21 poornima joined #gluster
05:23 skoduri joined #gluster
05:28 gem joined #gluster
05:29 jiffin1 joined #gluster
05:32 an_ joined #gluster
05:41 nbalacha joined #gluster
05:41 Bhaskarakiran joined #gluster
05:47 fus joined #gluster
05:48 mhulsman joined #gluster
05:51 kovshenin joined #gluster
05:52 poornima joined #gluster
05:58 mhulsman joined #gluster
05:59 nathwill joined #gluster
06:03 Bhaskarakiran joined #gluster
06:09 rafi joined #gluster
06:13 Saravanakmr joined #gluster
06:13 karnan joined #gluster
06:14 KpuCko joined #gluster
06:18 skoduri joined #gluster
06:20 ashiq joined #gluster
06:24 kovshenin joined #gluster
06:26 [diablo] joined #gluster
06:30 aspandey joined #gluster
06:33 rastar joined #gluster
06:33 ndarshan joined #gluster
06:33 jtux joined #gluster
06:34 jkroon joined #gluster
06:36 kovshenin joined #gluster
06:39 an_ joined #gluster
06:40 kovsheni_ joined #gluster
06:43 ankitraj joined #gluster
06:43 an_ joined #gluster
06:45 an_ joined #gluster
06:53 msvbhat joined #gluster
06:54 prth joined #gluster
06:57 rastar joined #gluster
07:01 unforgiven512 joined #gluster
07:04 ramky joined #gluster
07:05 mhulsman joined #gluster
07:13 satya4ever joined #gluster
07:14 an_ joined #gluster
07:17 an_ joined #gluster
07:22 mhulsman joined #gluster
07:23 fsimonce joined #gluster
07:38 jri joined #gluster
07:41 poornima joined #gluster
07:45 ankitraj joined #gluster
07:49 aspandey joined #gluster
07:50 rastar joined #gluster
07:58 Gnomethrower joined #gluster
08:17 Arrfab just curious, where is the gluster 3.8 doc ? http://gluster.readthedocs.io/ seems to mention up to 3.7
08:17 glusterbot Title: Gluster Docs (at gluster.readthedocs.io)
08:19 devyani7 joined #gluster
08:27 an_ joined #gluster
08:32 Bhaskarakiran joined #gluster
08:32 kotreshhr left #gluster
08:33 ankitraj joined #gluster
08:42 jtux joined #gluster
08:46 Champi joined #gluster
08:47 inodb joined #gluster
08:57 k4n0 joined #gluster
08:59 mhulsman joined #gluster
09:09 harish joined #gluster
09:15 jiffin joined #gluster
09:23 jiffin joined #gluster
09:30 karthik_ joined #gluster
09:31 RameshN joined #gluster
09:49 prth joined #gluster
09:57 aj__ joined #gluster
09:59 hackman joined #gluster
10:04 kovshenin joined #gluster
10:08 jiffin joined #gluster
10:16 rastar joined #gluster
10:20 prth joined #gluster
10:23 ankitraj joined #gluster
10:25 Gnomethrower joined #gluster
10:28 shyam joined #gluster
10:32 rastar joined #gluster
10:36 ira joined #gluster
10:39 karthik_ joined #gluster
10:48 arcolife joined #gluster
10:49 msvbhat joined #gluster
10:55 Bhaskarakiran joined #gluster
11:03 kramdoss_ joined #gluster
11:04 overclk joined #gluster
11:08 bluenemo joined #gluster
11:09 Saravanakmr joined #gluster
11:09 skoduri joined #gluster
11:09 jkroon joined #gluster
11:09 emitor_uy joined #gluster
11:09 sandersr joined #gluster
11:09 ebbex_ joined #gluster
11:09 frakt joined #gluster
11:09 rjoseph joined #gluster
11:09 The_Ball joined #gluster
11:09 gvandeweyer joined #gluster
11:09 jesk joined #gluster
11:09 javi404 joined #gluster
11:09 Anarka joined #gluster
11:09 ItsMe`` joined #gluster
11:09 kevc joined #gluster
11:09 wistof joined #gluster
11:11 lkoranda joined #gluster
11:12 eMBee joined #gluster
11:13 an_ joined #gluster
11:17 karnan joined #gluster
11:23 Arrfab can someone also point me to good doc about perf tuning and gluster options ? really difficult to troubleshoot perf issues
11:51 burn joined #gluster
11:51 mhulsman joined #gluster
11:52 B21956 joined #gluster
12:00 unclemarc joined #gluster
12:17 [fre] Dear gluster....
12:18 [fre] could somebody redhat-related tell me how to disable the automatic sharing of glustervolumes over samba?
12:20 snehring joined #gluster
12:21 plarsen joined #gluster
12:25 sanoj joined #gluster
12:27 rastar [fre]: for a particular volume?
12:27 rastar [fre]: or all volumes?
12:28 [fre] all
12:28 [fre] I don't want it generated automatically. for none.
12:28 Arrfab [fre]: not sure I understand
12:29 rastar [fre]: gluster vol set <volname> user.smb off
12:29 rastar [fre]: you will have to execute that command for all volumes you have
12:29 rastar [fre]: also remember to execute it for any new volume you create
12:31 rastar Arrfab: have you looked at this? https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/
12:31 glusterbot Title: Monitoring Workload - Gluster Docs (at gluster.readthedocs.io)
12:31 Arrfab rastar: don't you have to share it through samba ? so if you don't have samba, how can it be shared automatically ?
12:32 Arrfab rastar: yeah, read that but trying to understand how I can have ~170Mb/s write speed on each individual brick, but only ~10Mb/s write on the gluster vol
12:32 Arrfab and through IPoIB
12:32 rastar Arrfab: gluster volumes can be used over the native protocol using a FUSE mount, or over NFS. Gluster can export a volume using its own nfs-server implementation or even NFS-GANESHA
12:34 rastar Arrfab: It depends on your network setup and the replica count of the volume.
12:34 Arrfab rastar: that I know, but I don't think it has smb/cifs built-in
12:34 rastar Arrfab: Oh, no it does not..
12:34 rastar Arrfab: I am guessing that [fre] is using Samba for exporting something else but does not want Gluster to use Samba.
12:34 Arrfab rastar: for my setup : distributed+replicated, 4 nodes (2 replica), 1 brick per server, on a striped lv (two underlying disks)
12:35 Arrfab and 10Gbit network (IPoIB)
12:35 [fre] rastar, thing is that we use glustervolumes over smb, nfs & fuse.
12:35 [fre] We don't want every volume created to be used by smb.
12:36 [fre] sadly, when you create new ones, suddenly they appear to be shared by gluster... all of the new ones... automatically.
12:37 creshal joined #gluster
12:37 Arrfab [fre]: are those referenced in your smb.conf file ? or are you sharing the top mount point and you mount new volumes below ?
12:37 rastar [fre]: yes, your observation is right. We automatically add share entry for volumes when they are started.
12:37 [fre] they get referenced 'automatically' in smb.conf.
12:38 [fre] when started?
12:38 [fre] ok.
12:38 rastar [fre]: if you remember to set user.smb to off AFTER creating a volume and BEFORE starting it, they will never get added
12:38 [fre] ok.
12:38 rastar [fre] just remember that it is a volume level setting, so every new volume would require this action.
12:38 [fre] can we set it as a default-parameter to 'enable' when we need it?
12:39 [fre] instead of having to disable it for every new volume?
12:39 rastar [fre] do you mean default for user.smb to be OFF?
12:39 [fre] idd
12:41 rastar [fre] we chose default as yes. Unfortunately current GlusterD does not support setting defaults from admin
12:41 rastar [fre] we are going to add that feature in next Glusterd version.
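A minimal sketch of applying that advice to every existing volume, assuming bash and the stock gluster CLI on one of the peers (only the per-volume set command comes from the discussion above; the loop is an illustration):

    # Disable the automatic Samba share entry on every existing volume.
    # New volumes still need the same command, ideally run after
    # 'volume create' and before 'volume start', as rastar notes above.
    for vol in $(gluster volume list); do
        gluster volume set "$vol" user.smb off
    done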
12:42 rastar Arrfab: for simple file creates expected write speed for your setup would be 170/2 Mb/s on gluster vol
12:44 Arrfab rastar: why that ?
12:45 rastar Arrfab: for a replica setup, data has to be transferred from client to all the replica bricks
12:45 Arrfab that I know
12:45 Arrfab but network isn't the issue, as we have 10Gb
12:45 rastar Arrfab: if the network bandwidth was 170 then it would be half of that for replica 2
12:46 Arrfab rastar: no, I'd expect write speed to be almost what I can get at local brick RW speed
12:46 rastar Arrfab: have you tested over the network write speed ?
12:46 Arrfab as network isn't the bottleneck (tested with iperf between all nodes)
12:47 rastar Arrfab: ok
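To put rough numbers on the halving argument, using Arrfab's own figures (whether he means megabits or megabytes per second, the conclusion is the same):

    client must send each write to both replicas
      -> network ceiling ~ 10 Gb/s link / 2 = ~5 Gb/s per client
    each brick stores one copy at local disk speed
      -> disk ceiling ~ 170 Mb/s per brick
    observed on the gluster volume: ~10 Mb/s, far below both ceilings

So with a 10 Gb/s link the replica-2 halving alone cannot explain the drop; per-operation latency, small I/O sizes, or traffic taking a different path are the more likely suspects.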
12:47 rastar Arrfab: you mention IPoIB
12:47 Arrfab yes
12:47 Arrfab my IB HBAs are 40Gb aware, but the IB switch itself is only 10Gb
12:47 rastar Arrfab: have you used the IPs assigned to IB devices for creation of Gluster volumes?
12:48 Arrfab rastar: hostnames
12:48 Arrfab rastar: which are resolved to the IPs on the IB subnet
12:48 rastar Arrfab: ok, that is right setup
12:49 Arrfab it's true that I migrated from ethernet 1GB to IB, but quite some time ago
12:50 rastar Arrfab: I am suspecting that ethernet is being used for data transfer
12:50 Arrfab and sometimes I see (from monitoring graphs) that I can peak the IB switch usage to ~5 Gb/s, but perfs at the gluster side are awful
12:50 Arrfab rastar: why would I see traffic on the IB cards ?
12:51 Arrfab here is how I migrated if you're interested : https://arrfab.net/posts/2014/Nov/24/switching-from-ethernet-to-infiniband-for-gluster-access-or-why-we-had-to/
12:51 glusterbot Title: Switching from Ethernet to Infiniband for Gluster access (or why we had to ...) | Arrfab's Blog (at arrfab.net)
12:52 Arrfab maybe I'm missing something obvious, or there is now something faulty in the setup
12:52 * rastar checking
12:53 Arrfab rastar: basically only having two gluster volumes right now : one dedicated to a *bunch* of files (rsyncd target/module)
12:54 Arrfab and the other one is used for VMs, but not through fuse, but directly through ligfapi
12:54 Arrfab s/ligfapi/libgfapi/
12:54 glusterbot What Arrfab meant to say was: and the other one is used for VMs, but not through fuse, but directly through libgfapi
12:55 rastar Arrfab: and you see the problem with both the volumes?
12:56 Arrfab rastar: basically with the one hosting qcow2 images, but let me do a quick test
12:56 Arrfab rastar: both
12:57 rastar Arrfab: Ok, i asked because we fixed some gfapi bugs recently and wanted to eliminate them as root cause
12:57 [fre] rastar, can I ask what the difference is between user.cifs disable and user.smb off ?
12:58 Arrfab rastar: well, that gluster setup is still running 3.6.1, and I'll migrate to 3.8 next week, and also moving files around
12:58 rastar [fre]: both do the same work. user.smb deprecates user.cifs
12:58 rastar [fre] please use only one of them, I would recommend use only user.smb
12:58 [fre] user.cifs is to be replaced, .... ok. tnx. exactly my question.
12:59 rastar Arrfab: I have run out of ideas.
12:59 [fre] no difference between off and disable neither?
13:00 rastar Arrfab: it would help if you could perform the test with volume profile on and file a bug
13:00 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
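A minimal sketch of the profiling run rastar asks for, using the volume profile subcommands (the volume name is a placeholder):

    gluster volume profile myvol start   # begin collecting per-brick latency/fop statistics
    # ... reproduce the slow workload from a client ...
    gluster volume profile myvol info    # capture this output to attach to the bug report
    gluster volume profile myvol stop    # stop collecting once the test is done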
13:00 Arrfab rastar: do you know where I can find all the possible options for gluster vol, with explanations (documented) ?
13:00 kdhananjay1 joined #gluster
13:00 rastar Arrfab: table in page https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/
13:00 glusterbot Title: Managing Volumes - Gluster Docs (at gluster.readthedocs.io)
13:01 rastar [fre]: no difference
13:02 Arrfab rastar: thanks .. was searching for explanations around performance.cache* (as it seems written by the developers, so knowing what they're talking about already) :-)
13:04 rastar Arrfab: :)
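Besides the table in the Managing Volumes page, the CLI can describe the options itself; two commands worth knowing, hedged because the second needs a release newer than the 3.6.1 Arrfab mentions below:

    gluster volume set help       # lists settable options with a short description of each
    gluster volume get myvol all  # current/default values for one volume (newer releases only)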
13:10 Arrfab rastar: I see some people (on the mailing list archive) mentioning direct-io-mode for mount/fuse option : for which reason would it have to be enabled/disabled ?
13:11 rastar Arrfab: If you would prefer files not to be cached at all, you should use that option.
13:12 rastar Arrfab: by cached I mean write-behind cache either in Gluster or FUSE kernel layer
13:13 rastar Arrfab: that is usually required only when you want data to be written to disk immediately, and it will surely hamper performance
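For reference, direct-io-mode is passed as a FUSE mount option; a sketch with placeholder server and volume names:

    # write data through immediately instead of caching it (as rastar describes), at a performance cost
    mount -t glusterfs -o direct-io-mode=enable  server1:/myvol /mnt/myvol
    # allow write-behind caching, the usual faster behaviour
    mount -t glusterfs -o direct-io-mode=disable server1:/myvol /mnt/myvol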
13:14 rwheeler joined #gluster
13:21 atinm joined #gluster
13:27 mhulsman joined #gluster
13:37 glusterbot joined #gluster
13:42 kpease joined #gluster
13:48 msvbhat joined #gluster
14:05 shaunm joined #gluster
14:08 skylar joined #gluster
14:09 msvbhat joined #gluster
14:23 prth joined #gluster
14:26 msvbhat joined #gluster
14:29 an_ joined #gluster
14:30 nohitall join #ejabberd
14:37 hchiramm joined #gluster
14:37 dgossage joined #gluster
14:38 dgossage you available kdhananjay1?
14:39 kdhananjay1 dgossage: hi, yes
14:39 kdhananjay1 dgossage: i just got a response from the maintainer.
14:39 kdhananjay1 dgossage: we're from different TZs, that's why the delay
14:39 dgossage thats not a problem
14:39 harish joined #gluster
14:39 kdhananjay1 dgossage: he's given one additional step to be executed
14:40 dgossage ok
14:40 kdhananjay1 dgossage: i will be testing that out in a few min from now
14:40 kdhananjay1 dgossage: then i'd like you to try those steps on your test cluster.
14:40 dgossage can do
14:40 kdhananjay1 dgossage: if it works well, we can execute the steps on prod cluster too
14:41 rastar joined #gluster
14:42 nbalacha joined #gluster
14:48 k4n0 joined #gluster
14:49 jiffin joined #gluster
14:56 robb_nl joined #gluster
15:05 jri joined #gluster
15:09 ic0n joined #gluster
15:10 wushudoin joined #gluster
15:16 wushudoin joined #gluster
15:22 kdhananjay1 dgossage: ping
15:22 glusterbot kdhananjay1: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:22 dgossage i am back
15:22 kdhananjay1 dgossage: ok
15:23 kdhananjay1 dgossage: everything remains the same except for one additional step
15:23 kdhananjay1 dgossage: let me start from scratch:
15:23 kdhananjay1 dgossage: kill the brick process
15:23 kdhananjay1 dgossage: rm -rf the brick dir
15:23 kdhananjay1 dgossage: mkdir the brick directory
15:24 kdhananjay1 dgossage: setfattr -n trusted.afr.dirty -v 0x000000000000000000000001 <empty-brick-dir>
15:24 kdhananjay1 dgossage: note that the above is to be executed from the backend, not through the mount point.
15:25 kdhananjay1 dgossage: create the tmp mount using this - `glusterfs --volfile-id=$VOLNAME --volfile-server=$SERVER_HOSTNAME --client-pid=-6 $TMP_DIR`
15:26 kdhananjay1 dgossage: setfattr -n trusted.replace-brick -v $VOLNAME-client-$BRICK_INDEX $TMP_DIR
15:26 kdhananjay1 dgossage: check heal-info
15:26 kdhananjay1 dgossage: force-start the volume.
15:27 dgossage kdhananjay1: these should all still be ok to run with volume active looks like.  correct?
15:27 kdhananjay1 dgossage: monitor heal-info (it might momentarily say '/' is in split-brain, but that's ok, it will recover from that state in no time)
15:27 kdhananjay1 dgossage: yes, perfectly ok to have the IO going on.
15:27 dgossage kdhananjay1: other than brick that is killed
15:27 kdhananjay1 dgossage: ?
15:28 dgossage kdhananjay1: ok, ill re-run those steps on my test server now
15:28 kdhananjay1 dgossage: oh and don't forget to unmount the tmp dir once you're done ;)
15:28 dgossage kdhananjay1: nothing, just amending that the volume would be active other than the brick process which I had killed
15:28 dgossage ok
15:28 kdhananjay1 dgossage: yeah, only the offline brick won't be witnessing IO.
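Pulling kdhananjay1's steps together as a single sketch, with placeholder names (myvol, server1, /bricks/brick1, client index 0) standing in for the real values; the brick-PID lookup is an added assumption, since the log only says "kill the brick process":

    gluster volume status myvol                         # note the PID of the brick being replaced
    kill <brick-pid>                                    # 1. stop only that glusterfsd
    rm -rf /bricks/brick1 && mkdir /bricks/brick1       # 2-3. recreate an empty brick directory
    setfattr -n trusted.afr.dirty \
             -v 0x000000000000000000000001 /bricks/brick1    # 4. mark it dirty, on the backend
    TMP_DIR=$(mktemp -d)
    glusterfs --volfile-id=myvol --volfile-server=server1 \
              --client-pid=-6 "$TMP_DIR"                # 5. temporary special-purpose client mount
    setfattr -n trusted.replace-brick -v myvol-client-0 "$TMP_DIR"   # 6. point it at the new brick
    gluster volume heal myvol info                      # 7. check heal-info (keep monitoring after)
    gluster volume start myvol force                    # 8. bring the emptied brick back online
    umount "$TMP_DIR"                                   # and unmount the tmp dir when done

As in the exchange above, the volume can stay online throughout; only the killed brick misses IO until the force start.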
15:30 * kdhananjay will be afk for a while
15:40 an_ joined #gluster
15:41 jbrooks joined #gluster
15:42 dgossage kdhananjay: so far working just fine.  healed 200 shards and still dropping down. vm that is active on volume still responding and can perform writes
15:51 hagarth joined #gluster
15:53 kdhananjay dgossage: OK, nice.
15:55 kdhananjay dgossage: did the heal complete?
15:56 dgossage kdhananjay: 85 shards to go
15:56 JoeJulian om: No, I did not find time to duplicate your problem. If you file a bug report and I can duplicate it, I'll add that information to your bug along with anything else I can find.
15:56 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
15:57 JoeJulian om: You never did tell me if that setting I had you try worked around the problem.
16:01 dgossage kdhananjay: finished.  no weird files leftover in .glusterfs/indices either
16:01 kdhananjay dgossage: excellent
16:02 JoeJulian dgossage: What problem just got solved? I don't see any beginning to this conversation.
16:02 d0nn1e joined #gluster
16:03 dgossage JoeJulian:  Had issues with full heals on a brick that was removed then recreated.  would not start heal on shards
16:03 dgossage JoeJulian: was fixing underlying raid for bricks
16:04 JoeJulian Ok, so we then just set the root dirty and self-heal fixed the rest... Good to know.
16:04 dgossage JoeJulian: once I confirm that it works on my prod server as it did on test I will update all steps that led to it and that were used to resolve in the email I had started in users list
16:05 dgossage kdhananjay: you are a rock star and thank you for hanging in with me while we figured this out
16:05 JoeJulian Ah, *that* thread. That thing was huge.
16:05 kdhananjay dgossage: glad it worked for you. :) I'll probably update the ML tomorrow morning. Feeling sleepy.
16:05 JoeJulian kdhananjay++
16:05 glusterbot JoeJulian: kdhananjay's karma is now 7
16:05 baojg joined #gluster
16:06 JoeJulian I agree kdhananjay is a rock star.
16:06 B21956 joined #gluster
16:06 dgossage yeah it started over a weekend so I got a little verbose emailing my thoughts out loud while figuring out what I had done wrong
16:07 kdhananjay lol, this is embarrassing.
16:08 an_ joined #gluster
16:10 crashmag joined #gluster
16:11 robb_nl joined #gluster
16:16 ivan_rossi joined #gluster
16:16 ivan_rossi left #gluster
16:46 Sebbo1 joined #gluster
16:57 mhulsman joined #gluster
17:06 baojg_ joined #gluster
17:08 raghuhg joined #gluster
17:11 hagarth joined #gluster
17:14 mhulsman1 joined #gluster
17:16 JoeJulian amye: Are you back from vacation?
17:16 amye you rang?
17:17 JoeJulian You never actually answered my question about whether your accounting department would balk...
17:17 amye Oh, I beg your pardon.
17:18 amye Let me look at my email again.
17:18 JoeJulian It's ok, you were already on vacation I think. I'm sure you had happier things on your mind.
17:18 JoeJulian Looked relaxing.
17:19 amye Responded, I think my email client misformatted things.
17:19 amye I got totally sunburnt! That never happens! :D
17:22 JoeJulian amye: Also, read that article I cc'd you on on twitter. It's got some great information which I don't even know if we can do with bugzilla.
17:23 JoeJulian where "we" is "you" because I don't even have access beyond a generic user.
17:24 amye Oh I see, is that the thing that Brian Profitt just posted?
17:24 JoeJulian yeah
17:25 amye I see, and it's a Jono post as well
17:28 amye I will review this in more depth and see what else might be valuable in here, it's at least worth bringing up on the mailing lists.
17:34 raghuhg joined #gluster
17:41 shyam joined #gluster
17:44 an_ joined #gluster
17:47 rafi joined #gluster
17:52 skoduri joined #gluster
18:09 robb_nl joined #gluster
18:25 armin joined #gluster
18:55 crag joined #gluster
18:59 rastar joined #gluster
19:20 jri joined #gluster
19:29 jri_ joined #gluster
19:45 jri joined #gluster
19:49 mhulsman joined #gluster
20:19 jri joined #gluster
20:26 jri_ joined #gluster
20:31 k4n0 joined #gluster
20:43 prth joined #gluster
21:11 aj__ joined #gluster
21:34 congpine joined #gluster
21:35 congpine hi, one of the nodes in my replica has had its OS disk fail and I have re-built the OS. the brick is fine as it is on a different raid system.
21:35 congpine I have followed this guide to use the same IP, re-add it to the cluster and peer with the others
21:35 congpine https://support.rackspace.com/how-to/recover-from-a-failed-server-in-a-glusterfs-array/
21:35 glusterbot Title: Recover from a failed server in a GlusterFS array (at support.rackspace.com)
21:35 congpine I can run gluster volume on that node and see the volume. I have started the brick as well.
21:35 ttkg joined #gluster
21:36 congpine However, when I check from the client, I can't seem to run ls on the volume. I checked netstat and saw that the connection to that node is in TIME_WAIT
21:36 congpine tcp        0      0 192.168.1.204:944       192.168.1.190:24007      TIME_WAIT
21:36 congpine port 24007 on that node ( 192.168.1.190) is open and listening
21:37 congpine 1.204 is the gluster client.
21:37 congpine When I view from other gluster server, I saw the same result
21:37 congpine what am I missing ?
21:46 B21956 joined #gluster
21:57 congpine I have stopped glusterfs-server (and glusterfsd) and the volume is now responsive. I can ls from the client. It looks like port 24007 is blocked, but I don't run any firewall. It's a Debian 6 OS
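A few checks that would help narrow this down, assuming the stock gluster CLI and basic tools on the rebuilt Debian node (the IPs are the ones congpine quotes; the volume name is a placeholder):

    gluster peer status              # is the rebuilt node in the pool and Connected?
    gluster volume status myvol      # are all bricks listed as online, with their ports?
    netstat -tlnp | grep gluster     # which ports glusterd/glusterfsd are actually listening on
    iptables -L -n                   # any packet-filter rules despite "no firewall"?
    telnet 192.168.1.190 24007       # can the client really complete a TCP connection?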
22:03 gluytium joined #gluster
22:08 om gluster devs, any ideas on the bug I was talking about before in version 3.7.14?
22:09 om when rebuilding one brick on a replica 4, all gluster clients show the data of the newly rebuilt brick (that is empty with no data and needs healing) instead of the 3 that have the data.
22:11 om at this point I have to build a new gluster cluster and rsync all data from all bricks over to the new cluster.  but I am worried about what gluster version to use because of this bug
22:11 om any advice?
22:21 Jacob843 joined #gluster
22:22 armyriad joined #gluster
22:32 shruti joined #gluster
22:53 delhage joined #gluster
23:12 shyam joined #gluster
23:17 pkalever joined #gluster
23:40 Peppard joined #gluster
