
IRC log for #gluster, 2017-06-12


All times shown according to UTC.

Time Nick Message
00:04 plarsen joined #gluster
00:06 Alghost joined #gluster
01:27 hvisage joined #gluster
01:32 Alghost joined #gluster
01:35 Alghost joined #gluster
01:38 Alghost_ joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:58 shdeng joined #gluster
02:08 zcourts joined #gluster
02:11 primehaxor joined #gluster
02:45 ppai joined #gluster
02:55 susant joined #gluster
03:04 kramdoss_ joined #gluster
03:09 zcourts joined #gluster
03:18 aravindavk joined #gluster
03:26 Peppard joined #gluster
03:35 riyas joined #gluster
03:35 Prasad joined #gluster
03:59 Shu6h3ndu joined #gluster
03:59 itisravi joined #gluster
04:04 vbellur joined #gluster
04:08 vbellur joined #gluster
04:09 vbellur1 joined #gluster
04:09 vbellur joined #gluster
04:10 vbellur joined #gluster
04:11 zcourts joined #gluster
04:17 kramdoss_ joined #gluster
04:17 poornima joined #gluster
04:20 sanoj|afk joined #gluster
04:24 Shu6h3ndu joined #gluster
04:33 ketarax joined #gluster
04:36 buvanesh_kumar joined #gluster
04:42 Prasad_ joined #gluster
04:49 susant joined #gluster
04:50 susant left #gluster
04:57 vbellur joined #gluster
04:57 sanoj joined #gluster
04:57 vbellur joined #gluster
04:58 vbellur joined #gluster
05:00 vbellur joined #gluster
05:09 Karan joined #gluster
05:11 zcourts joined #gluster
05:15 Prasad joined #gluster
05:20 vbellur joined #gluster
05:21 karthik_us joined #gluster
05:29 susant joined #gluster
05:30 ndarshan joined #gluster
05:30 apandey joined #gluster
05:32 ashiq joined #gluster
05:32 kdhananjay joined #gluster
05:41 JoeJulian srsc: No, that's not normal. Check your glustershd.log files. I'll follow up tomorrow, I'm off to bed.
05:46 Saravanakmr joined #gluster
05:51 riyas joined #gluster
05:53 hgowtham joined #gluster
05:58 apandey_ joined #gluster
06:02 prasanth joined #gluster
06:03 nigelb ^andrea^: you get nfs v4 with nfs-ganesha and nfs-ganesha also lets you setup HA (at least as far as I know, there could be more advantages)
06:07 bkunal joined #gluster
06:08 skumar joined #gluster
06:11 apandey_ joined #gluster
06:11 lkoranda joined #gluster
06:11 apandey joined #gluster
06:13 [diablo] joined #gluster
06:17 karthik_us joined #gluster
06:25 ankitr joined #gluster
06:29 ayaz joined #gluster
06:32 sona joined #gluster
06:37 jtux joined #gluster
06:39 sona joined #gluster
06:40 jtux joined #gluster
06:42 sona joined #gluster
06:44 Alghost joined #gluster
06:56 Wizek_ joined #gluster
06:57 kdhananjay joined #gluster
06:59 lalatenduM joined #gluster
07:08 skoduri joined #gluster
07:15 ivan_rossi joined #gluster
07:23 kdhananjay joined #gluster
07:30 vexoon I am currently stuck with a very strange issue...gluster 3.8.12, ubuntu 16.04. when doing the following command, we get strange results. gluster volume set test5 nfs.rpc-auth-allow 172.18.0.0/18 gives error -> is not a valid mount-auth-address.  gluster volume set test5 nfs.rpc-auth-allow 172.18.*.*/18 works... we can only reproduce it on this specific set of machines, trying to build a repro on vagrant or any other machines, there gluster volume s
07:30 vexoon et test5 nfs.rpc-auth-allow 172.18.0.0/18 just works.
07:37 kdhananjay1 joined #gluster
07:37 kjackal joined #gluster
07:44 kdhananjay joined #gluster
07:45 vexoon https://github.com/gluster/glusterfs/blob/23930326e0378edace9c8c41e8ae95931a2f68ba/libglusterfs/src/common-utils.c#L2547 this tells me it should work...
07:45 glusterbot Title: glusterfs/common-utils.c at 23930326e0378edace9c8c41e8ae95931a2f68ba · gluster/glusterfs · GitHub (at github.com)
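(A small aside on the commands above: once a set succeeds, a quick way to confirm what was actually stored is the volume get interface. This is a sketch — the volume name comes from vexoon's report, and volume get should be available in the 3.8 series as far as I recall:
    gluster volume get test5 nfs.rpc-auth-allow )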
07:45 buvanesh_kumar_ joined #gluster
07:45 Shu6h3ndu_ joined #gluster
07:45 skoduri_ joined #gluster
07:45 Saravanakmr_ joined #gluster
07:45 ksandha_ joined #gluster
07:45 darshan joined #gluster
07:46 apandey joined #gluster
07:46 poornima_ joined #gluster
07:47 kdhananjay1 joined #gluster
07:48 sona joined #gluster
07:50 apandey joined #gluster
07:50 skumar joined #gluster
07:51 kramdoss_ joined #gluster
07:55 kdhananjay joined #gluster
07:58 kdhananjay1 joined #gluster
07:59 ^andrea^ thanks nigelb
08:00 ^andrea^ I thought NFS could already be version 4 as there is this comment in /etc/nfsmount.conf
08:00 ^andrea^ # Defaultvers=4
08:00 ^andrea^ even though it's commented out by default...
08:07 rastar joined #gluster
08:12 kdhananjay joined #gluster
08:15 itisravi joined #gluster
08:21 ankitr joined #gluster
08:23 Sense8 joined #gluster
08:26 mbukatov joined #gluster
08:27 Klas commented out by default, in general, should mean that it's the default option, and thus doesn't need to be set
08:27 Klas not absolute rule, but general one
08:27 Klas no idea in this particular case
08:27 nigelb ^andrea^: Oh, I thought you meant compared to gNFS
08:27 nigelb which I think does not do v4
08:27 nigelb but I could be wrong.
08:31 ^andrea^ I think you are right nigelb
08:31 ^andrea^ the gluster NFS docs mention version 3...
08:32 ^andrea^ I meant gNFS, but I was reading the OS config file, rather than gluster's..
08:33 ^andrea^ all I'm trying to achieve is HA, and I assume that means I will need ganesha
08:34 vexoon Hm I cant seem to create a bugzilla account for gluster? We dug into the code and found an issue with CIDR in the nfs.rpc-auth-allow
08:34 ivan_rossi left #gluster
08:37 wiets joined #gluster
08:37 nigelb vexoon: what error do you get?
08:39 ^andrea^ last night I was testing a 3-node cluster
08:39 wiets Morning, i am trying to upgrade the hardware on a 2 node gluster, but i am unsure how to shutdown the node
08:40 ^andrea^ installed glusterfs client on another machine, mounted the share (from node1) as glusterfs and run a for loop to write lots of files..
08:41 ^andrea^ then I rebooted node2 (not node1) and the for loop hung until node2 came back...
08:42 ^andrea^ what's the trick for making this HA please?
08:43 ^andrea^ I need to be able to reboot one single machine whenever necessary (eg for maintenance)
08:43 DV joined #gluster
08:45 Sense8 joined #gluster
08:47 ^andrea^ any tip/URL so that I understand what I'm doing wrong, would be much appreciated :-)
08:50 karthik_us joined #gluster
08:50 nigelb You probably want https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/ for the type of volume.
08:50 glusterbot Title: Setting Up Volumes - Gluster Docs (at gluster.readthedocs.io)
08:51 nigelb And this one for NFS-Ganesha https://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
08:51 glusterbot Title: Configuring NFS-Ganesha server - Gluster Docs (at gluster.readthedocs.io)
08:53 ^andrea^ thanks, so gluster without ganesha doesn't do HA (as per my test above)?
08:53 nigelb Gluster without ganesha does do HA.
08:53 nigelb But you don't get NFS access, I suspect.
08:53 nigelb (I could be wrong)
08:54 nigelb NFS access highly available, that is.
08:55 ^andrea^ OK, so I need to understand what I did wrong in my test above
08:55 ^andrea^ all I did was creating a brick1 on each node
08:55 ^andrea^ 3 nodes in total
08:55 nigelb what type of volume?
08:55 ^andrea^ and then created the cluster with replica set to 3
08:58 ^andrea^ gluster volume create gv0 replica 3 test-gfs1:/srv/glusterfs/bricks/brick1/gv0 test-gfs2:/srv/glusterfs/bricks/brick1/gv0 test-gfs3:/srv/glusterfs/bricks/brick1/gv0
08:58 ^andrea^ that's the command I used
08:58 nigelb Oh, I see what you mean.
08:58 nigelb You used one of the hostnames to connect
08:58 nigelb (the client)
08:58 nigelb and when that host went down, the client couldn't read/write.
08:58 nigelb Correct?
08:58 ^andrea^ I used test-gfs1 to mount, yes
08:59 ^andrea^ I did think that would not be HA (obviously), but I rebooted test-gfs2 (not test-gfs1)
08:59 nigelb oh. that didn't work either.
08:59 nigelb okay, that is strange.
08:59 nigelb I'll try to see if I can find someone who knows better than me :)
09:00 nigelb <-- not a gluster dev
09:00 glusterbot nigelb: <'s karma is now -29
09:01 ^andrea^ am I supposed to setup a floating IP to get HA, or the glusterfs client handles that (hosts going down)..
09:01 ^andrea^ ?
09:01 susant joined #gluster
09:01 ^andrea^ thanks again for your help nigelb
09:04 jkroon joined #gluster
09:05 nigelb ^andrea^: my colleague tells me there will be an interruption.
09:05 nigelb But after a minute or so, IO should continue.
09:06 ^andrea^ OK, so if I have to take a machine down for maintenance, there would be downtime... :-/
09:06 nigelb ^andrea^: gluster can do its own HA
09:06 nigelb See https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Clients/
09:06 glusterbot Title: Setting Up Clients - Gluster Docs (at gluster.readthedocs.io)
09:07 nigelb specifically backupvolfile-server
09:09 nigelb There should be no lost IO.
09:10 ^andrea^ but the application would hang
09:11 ^andrea^ I will need to further investigate that backupvolfile-server, thanks
09:11 ^andrea^ thanks for all the tips ;-)
09:12 nigelb It would hang briefly as we sync the IO
09:12 nigelb yes.
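(A minimal sketch of the mount option nigelb refers to, using the hostnames and volume from ^andrea^'s test above; the mount point is illustrative and newer mount.glusterfs also accepts the plural backup-volfile-servers form:
    mount -t glusterfs -o backup-volfile-servers=test-gfs2:test-gfs3 test-gfs1:/gv0 /mnt/gv0
    # or the equivalent /etc/fstab entry:
    # test-gfs1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backup-volfile-servers=test-gfs2:test-gfs3  0 0
The backup servers are only used to fetch the volume file at mount time; once mounted, the FUSE client talks to all replicas directly, so the brief stall during a server reboot is the client waiting out the dead connection — governed, if I recall correctly, by network.ping-timeout, which defaults to 42 seconds.)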
09:12 apandey joined #gluster
09:13 Seth_Karlo joined #gluster
09:13 Prasad_ joined #gluster
09:14 Seth_Karlo joined #gluster
09:23 susant left #gluster
09:26 susant joined #gluster
09:28 ankitr joined #gluster
09:29 susant joined #gluster
09:32 sona joined #gluster
10:10 Alghost joined #gluster
10:16 Prasad__ joined #gluster
10:17 Alghost joined #gluster
10:18 Alghost joined #gluster
10:19 Alghost joined #gluster
10:20 Alghost joined #gluster
10:20 Alghost joined #gluster
10:21 susant joined #gluster
10:21 Alghost joined #gluster
10:26 Alghost joined #gluster
10:36 vbellur1 joined #gluster
10:37 ankitr joined #gluster
10:37 vbellur1 joined #gluster
10:37 vbellur1 joined #gluster
10:38 vbellur1 joined #gluster
10:39 vbellur joined #gluster
10:40 vbellur joined #gluster
10:42 vbellur joined #gluster
10:44 bfoster joined #gluster
10:45 nbalacha joined #gluster
11:01 karthik_us joined #gluster
11:11 sona joined #gluster
11:25 kdhananjay joined #gluster
11:35 apandey_ joined #gluster
11:36 neferty joined #gluster
11:37 neferty hey guys, in the docs here: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/ regarding "creating striped replicated volumes" you say that you only support this for map reduce workloads. what's meant by this?
11:37 glusterbot Title: Setting Up Volumes - Gluster Docs (at gluster.readthedocs.io)
11:38 WebertRLZ joined #gluster
11:44 hgowtham joined #gluster
11:47 Alghost joined #gluster
11:56 Alghost joined #gluster
11:57 Shu6h3ndu_ joined #gluster
11:58 cloph @stripe
11:58 glusterbot cloph: (#1) Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes., or (#2) The stripe translator is deprecated. Consider enabling sharding instead.
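(For anyone who, like neferty below, wants to try sharding in place of the deprecated stripe translator, a minimal sketch — the volume name is illustrative and the block size is a tunable shown here with a common value:
    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB
Sharding splits large files into fixed-size chunks spread across the bricks, which covers the use case stripe was generally chosen for.)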
12:17 msvbhat joined #gluster
12:21 plarsen joined #gluster
12:31 skumar joined #gluster
12:35 karthik_ joined #gluster
12:35 poornima__ joined #gluster
12:35 Shu6h3ndu__ joined #gluster
12:35 bsivasub__ joined #gluster
12:36 susant joined #gluster
12:36 ksandha_ joined #gluster
12:36 ahino joined #gluster
12:36 ndarshan joined #gluster
12:37 rastar joined #gluster
12:38 shruti joined #gluster
12:40 lalatenduM joined #gluster
12:42 kdhananjay1 joined #gluster
12:48 kdhananjay joined #gluster
12:50 neferty cloph: i can't find any docs on sharding in the latest docs, only in the docs of older versions
12:56 atinm joined #gluster
13:01 vbellur joined #gluster
13:01 vbellur1 joined #gluster
13:02 susant joined #gluster
13:11 ahino joined #gluster
13:15 ahino joined #gluster
13:16 guhcampos joined #gluster
13:18 vbellur joined #gluster
13:18 vbellur joined #gluster
13:19 vbellur joined #gluster
13:20 vbellur joined #gluster
13:20 vbellur joined #gluster
13:21 susant joined #gluster
13:21 vbellur joined #gluster
13:24 ahino joined #gluster
13:24 vbellur joined #gluster
13:26 ahino1 joined #gluster
13:33 kramdoss_ joined #gluster
13:53 vbellur joined #gluster
13:54 susant left #gluster
14:02 ahino joined #gluster
14:05 msvbhat joined #gluster
14:08 skoduri joined #gluster
14:09 bowhunter joined #gluster
14:11 kpease joined #gluster
14:14 farhorizon joined #gluster
14:30 Seth_Karlo joined #gluster
14:31 farhorizon joined #gluster
14:32 ankitr joined #gluster
14:35 bhakti joined #gluster
14:47 farhorizon joined #gluster
14:56 jbrooks joined #gluster
15:09 Karan joined #gluster
15:15 susant joined #gluster
15:15 Wizek_ joined #gluster
15:18 zcourts joined #gluster
15:20 bowhunter joined #gluster
15:32 farhoriz_ joined #gluster
15:41 Gambit15 Hey guys, I've got a problem with a peer not syncing correctly. If anyone's around to lend an eye, it'd be much appreciated :)
15:55 zcourts joined #gluster
15:59 srsc left #gluster
15:59 sona joined #gluster
16:02 rastar joined #gluster
16:05 srsc joined #gluster
16:16 ankitr joined #gluster
16:18 srsc JoeJulian: here's my glustershd.log excerpt from the first node (self-heal daemon not starting on any node) https://paste.debian.net/plain/971206
16:28 JoeJulian srsc: You'll need to solve "connection to ::1:24007 failed (Connection refused)". If that's not because glusterd isn't listening, it's iptables or something.
16:32 TBlaar2 joined #gluster
16:52 timg__ joined #gluster
16:53 timg__ hi, when my client just gets an empty file, but the file with content is on the brick, what could be wrong?
16:55 JoeJulian timg__: That shouldn't happen (obviously). Check the client log (/var/log/glusterfs/${mount_directory/./_}.log) and the heal status (gluster volume heal $volname info).
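(For reference, the FUSE client log name is typically derived from the mount point with the slashes turned into dashes; e.g. a volume mounted at /mnt/gv0 would usually log to:
    /var/log/glusterfs/mnt-gv0.log
The mount point here is illustrative.)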
16:56 timg__ i already triggered a full heal, gluster also says it is not in a split brain situation. I have this issue with just a few files
17:00 Gambit15 joined #gluster
17:01 JoeJulian Check what paths it has with "getfattr -n trusted.pathinfo -e text $broken_file" to see if you can find a problem with those brick copies.
17:06 ankitr joined #gluster
17:07 timg__ JoeJulian: i've to try this tomorrow, i'll compare the trusted.pathinfo with files that are working
17:08 JoeJulian good luck
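(A sketch of the check JoeJulian describes — on a FUSE mount the virtual xattr is usually exposed as trusted.glusterfs.pathinfo, and the file path below is illustrative:
    getfattr -n trusted.glusterfs.pathinfo -e text /mnt/gv0/path/to/broken_file
The output lists the brick copies backing the file, which can then be compared, e.g. with md5sum, against a file that reads correctly.)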
17:13 shruti` joined #gluster
17:13 sac joined #gluster
17:14 jbrooks joined #gluster
17:15 ^andrea^ joined #gluster
17:19 jbrooks joined #gluster
17:22 WebertRLZ hello guys
17:22 msvbhat joined #gluster
17:23 WebertRLZ I want to move a brick to another partition on the disk, but to remove the current brick from a volume gluster says that I must change the replica count first
17:23 WebertRLZ I can't find how to do that
17:26 JoeJulian What I like to do is to just kill the brick server for that brick (just kill the process serving the brick whose path you want to change), mount it to the new path, then "gluster volume replace-brick $volname $old_brick_path $new_brick_path commit force"
17:26 Seth_Kar_ joined #gluster
17:28 msvbhat joined #gluster
17:31 WebertRLZ JoeJulian, right, that will work without the old brick's process running?
17:32 farhorizon joined #gluster
17:33 toloughl joined #gluster
17:34 JoeJulian WebertRLZ: yes. Consider if the server had caught fire. It's the same process just without the insurance claim. ;)
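(Putting JoeJulian's steps together as a sketch — the volume name, server and brick paths are illustrative:
    gluster volume status gv0                      # shows the PID of each brick process
    kill <pid-of-old-brick>                        # stop only the brick being moved
    # mount the filesystem at (or move the data to) the new path, then swap the brick in:
    gluster volume replace-brick gv0 server1:/bricks/old/gv0 server1:/bricks/new/gv0 commit force
    gluster volume heal gv0                        # let self-heal catch up anything missed )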
17:39 srsc JoeJulian: I notice that the self heal daemon is trying to connect to ::1:24007, but netstat is showing that port 24007 is only available via tcp/ipv4
17:40 srsc https://paste.debian.net/plain/971218
17:41 srsc i'm looking around for an option to either force shd to use ipv4 or make glusterd also use ipv6, but if anyone knows offhand...
17:41 JoeJulian srsc: Stupid debian. :P I always forget they split their stack.
17:42 jbrooks joined #gluster
17:42 JoeJulian I /think/ you can just change localhost in /etc/hosts to not include the ipv6 address.
17:45 srsc JoeJulian: ok, removing localhost from ::1 in /etc/hosts and restarting the volume started the self heal daemon on that node
17:45 srsc to your knowledge is there a way to force shd to use ipv4?
17:48 JoeJulian shd is connecting to "localhost" so the way to force it is to ensure that the hostname resolves that way.
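(A sketch of the workaround srsc applied, assuming a stock Debian/Ubuntu hosts file — the point is that "localhost" no longer resolves to ::1 while glusterd only listens on IPv4:
    127.0.0.1   localhost
    ::1         ip6-localhost ip6-loopback     # "localhost" removed from this line )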
17:50 Wizek_ joined #gluster
17:52 srsc ok, thanks. i tried to add "transport.address-family: inet" to volume glustershd in /var/lib/glusterd/glustershd/glustershd-server.vol but it just got removed when the volume starts
17:52 srsc thanks for your help, much appreciated
17:53 JoeJulian you're welcome
17:53 JoeJulian theoretically there's probably a way to do it if you run glusterd in its own netns.
17:55 WebertRLZ Thanks JoeJulian
17:56 JoeJulian You're welcome as well. :)
17:58 repnzscasb joined #gluster
17:58 repnzscasb joined #gluster
18:01 patromo joined #gluster
18:11 patromo hi :)
18:20 ankitr joined #gluster
18:38 guhcampos joined #gluster
18:38 ahino joined #gluster
18:41 bowhunter joined #gluster
18:55 timg__ after updating the gluster client from 3.7.x to 3.10.x i cant mount (fuse) anymore (server is running gluster 3.10.x), in the logs i found "E [socket.c:3217:socket_connect] 0-xxxxx-client-2: connection attempt on xxx.xxx.xxx.xxx:24007 failed, (Cannot assign requested address)". the port is open, telnet works .... any idea? if i downgrade the client everything works.
19:00 JoeJulian Hmm, "Cannot assign requested address" is the culprit. Since it's client-side, that's strange. Like your machine has used up all the available ports - which seems unlikely.
19:16 wolsen joined #gluster
19:30 msvbhat joined #gluster
19:37 plarsen joined #gluster
19:41 zcourts joined #gluster
20:21 msvbhat_ joined #gluster
20:26 fsimonce joined #gluster
20:30 farhoriz_ joined #gluster
20:47 farhorizon joined #gluster
20:54 farhorizon joined #gluster
20:55 jbrooks joined #gluster
21:01 farhorizon joined #gluster
21:07 XpineX joined #gluster
21:11 jbrooks joined #gluster
21:14 kpease joined #gluster
21:41 farhorizon joined #gluster
22:14 john51 joined #gluster
22:40 bios_l joined #gluster
22:42 bios_l cli/Makefile.am:3: warning: compiling 'gluster-block.c' with per-target flags requires 'AM_PROG_CC_C_O' in 'configure.ac' ... got this error while running ./autogen.sh when installing gluster-block from source...
22:42 bios_l automake: warnings are treated as errors
22:47 JoeJulian bios_l: Might want to hit up gluster-dev and see if there's anybody responding. If not, I'd take that particular question to the gluster-devel mailing list. The users list deals more with pre-packaged installs.
22:48 bios_l okay
22:48 bios_l thanks
22:48 JoeJulian You're welcome.
22:48 JoeJulian btw, most of the devs are in Bangalore so keep that in mind when considering timezones and response times.
22:49 bios_l okay
23:34 Alghost joined #gluster
23:37 Alghost joined #gluster
23:37 Alghost joined #gluster
23:41 Alghost_ joined #gluster
23:44 cornfed78 joined #gluster
23:45 srsc left #gluster
23:57 Alghost joined #gluster
