IRC log for #gluster, 2016-01-07

All times shown according to UTC.

Time Nick Message
00:33 haomaiwang joined #gluster
00:37 amye joined #gluster
00:41 shyam joined #gluster
00:43 zhangjn joined #gluster
00:51 F2Knight joined #gluster
01:06 zhangjn joined #gluster
01:07 EinstCrazy joined #gluster
01:23 JesperA joined #gluster
01:35 Lee1092 joined #gluster
01:51 overclk joined #gluster
02:00 julim joined #gluster
02:08 overclk joined #gluster
02:23 harish_ joined #gluster
02:26 fortpedro joined #gluster
02:26 haomaiwa_ joined #gluster
02:29 haomai___ joined #gluster
02:33 farhorizon joined #gluster
02:38 zhangjn joined #gluster
02:40 natarej_ joined #gluster
02:45 natarej joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:49 DV__ joined #gluster
02:55 overclk joined #gluster
03:01 haomaiwa_ joined #gluster
03:08 kdhananjay joined #gluster
03:10 gem joined #gluster
03:11 hagarth joined #gluster
03:18 volga629 Is anybody so this http://download.gluster.org/pub/gluster/glusterfs-nagios/1.1.0/
03:18 glusterbot Title: Index of /pub/gluster/glusterfs-nagios/1.1.0 (at download.gluster.org)
03:19 volga629 for monitoring ?
03:22 overclk joined #gluster
03:30 gem joined #gluster
03:34 cliluw volga629: I think you accidentally a verb there.
03:34 volga629 I just looking monitoring plugin
03:36 chirino joined #gluster
03:37 volga629 How to use it this plugin ?
03:40 vmallika joined #gluster
03:44 atinm joined #gluster
03:59 sankarshan_ joined #gluster
03:59 ppai joined #gluster
04:00 shubhendu joined #gluster
04:01 haomaiwa_ joined #gluster
04:03 RameshN joined #gluster
04:07 kanagaraj joined #gluster
04:14 bharata-rao joined #gluster
04:17 ashiq joined #gluster
04:18 ashiq joined #gluster
04:26 kshlm joined #gluster
04:29 poornimag joined #gluster
04:29 farhorizon joined #gluster
04:35 sakshi joined #gluster
04:35 Manikandan joined #gluster
04:42 nehar joined #gluster
04:45 nbalacha joined #gluster
04:52 RameshN joined #gluster
04:56 kotreshhr joined #gluster
04:59 dusmant joined #gluster
05:01 haomaiwang joined #gluster
05:04 pppp joined #gluster
05:06 pppp joined #gluster
05:11 ndarshan joined #gluster
05:14 EinstCrazy joined #gluster
05:18 Javezim Anyone currently on ever had Glusterfs shared with Samba, and had Hyper-V VMs stored on it?
05:20 EinstCra_ joined #gluster
05:20 gem joined #gluster
05:23 zhangjn joined #gluster
05:26 Apeksha joined #gluster
05:31 JoeJulian Nope. Nobody uses hyper-v. ;)
05:34 marlinc joined #gluster
05:34 zhangjn_ joined #gluster
05:35 JoeJulian Javezim: Second time you've asked that question. Generally survey type questions fare pretty poorly in IRC channels. Are you having a problem implementing that?
05:36 Javezim Yeah it's an upcoming projects that I've almost given up on. ISCSI looks promising over Samba now so will probably just go with that.
05:37 Javezim But am still curious if anyone has ever implented it please let me know :)
05:37 hgowtham joined #gluster
05:38 Bhaskarakiran joined #gluster
05:38 jiffin joined #gluster
05:39 JoeJulian If I had a gun to my head and was told I had to work with hyperv, I would probably go the samba route. It can be made more resilient, imho.
05:39 aravindavk joined #gluster
05:41 JoeJulian Javezim: look at CTDB https://www.rackspace.com/knowledge_center/article/glusterfs-high-availability-through-ctdb
05:41 glusterbot Title: GlusterFS high availability through CTDB | Knowledge Center | Rackspace Hosting (at www.rackspace.com)
05:41 anil joined #gluster
05:42 Javezim Thanks Joe
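
For context, the CTDB approach linked above pairs "clustering = yes" in Samba with the Gluster VFS module; a minimal smb.conf sketch (the volume name gv0 and the share name are purely illustrative) might look like:

    [global]
        clustering = yes

    [hyperv-store]
        path = /
        read only = no
        vfs objects = glusterfs
        glusterfs:volume = gv0
        glusterfs:logfile = /var/log/samba/glusterfs-gv0.log

CTDB itself then manages the floating public IPs (listed in its public_addresses file) that SMB clients connect to.
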
05:43 ndarshan joined #gluster
05:44 plarsen joined #gluster
05:45 zhangjn joined #gluster
05:48 kdhananjay joined #gluster
05:54 zhangjn joined #gluster
06:00 zhangjn joined #gluster
06:00 arcolife joined #gluster
06:00 vmallika joined #gluster
06:01 17WABHUQJ joined #gluster
06:02 zhangjn joined #gluster
06:08 vimal joined #gluster
06:10 mobaer joined #gluster
06:13 Humble joined #gluster
06:19 hagarth joined #gluster
06:19 karnan joined #gluster
06:31 gem joined #gluster
06:32 ramky joined #gluster
06:32 hgowtham_ joined #gluster
06:34 Manikandan joined #gluster
06:35 zhangjn joined #gluster
06:36 zhangjn joined #gluster
06:37 kaushal_ joined #gluster
06:42 auzty joined #gluster
06:45 ahino joined #gluster
06:46 Saravana_ joined #gluster
06:48 atinm joined #gluster
06:53 pdrakeweb joined #gluster
06:56 itisravi joined #gluster
06:56 d0nn1e joined #gluster
06:57 gem_ joined #gluster
07:03 thessy joined #gluster
07:04 zhangjn joined #gluster
07:07 haomaiwa_ joined #gluster
07:11 EinstCrazy joined #gluster
07:15 jtux joined #gluster
07:18 mhulsman joined #gluster
07:22 SOLDIERz joined #gluster
07:29 Manikandan joined #gluster
07:29 skoduri joined #gluster
07:29 EinstCrazy joined #gluster
07:30 atinm joined #gluster
07:41 deniszh joined #gluster
07:41 thessy hi, on replacing a brick (new hw and newly installed os) in a replicated 2-node-cluster (glusterfs 3.6.7) re-synchronisation does not work or even start
07:42 thessy log from brick: [2016-01-07 07:43:23.111294] W [server-resolve.c:437:resolve_anonfd_simple] 0-server: inode for the gfid (810eb5c6-6383-43cb-bd6e-d2580e759b09) is not found. anonymous fd creation failed
07:42 thessy whats going wrong here? what can I do?
07:43 JoeJulian ~pastestatus | thessy
07:43 glusterbot thessy: Please paste the output of gluster peer status from more than one server to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
07:44 JoeJulian Might as well paste "gluster volume info" at the same time.
07:44 mobaer joined #gluster
07:46 deepakcs joined #gluster
07:48 JoeJulian Are you still there?
07:48 thessy jup
07:48 thessy output is coming
07:49 mhulsman joined #gluster
07:50 thessy http://fpaste.org/308082/53006145/
07:50 glusterbot Title: #308082 Fedora Project Pastebin (at fpaste.org)
07:59 nangthang joined #gluster
08:01 haomaiwa_ joined #gluster
08:03 thessy Are pasted outputs sufficient?
08:03 thessy Do you need more?
08:03 atalur joined #gluster
08:05 zhangjn joined #gluster
08:08 [Enrico] joined #gluster
08:08 ivan_rossi joined #gluster
08:09 JoeJulian Sorry, thessy, I forgot to come back and check on you. :D
08:09 JoeJulian Which one's new?
08:09 thessy vsh01
08:10 JoeJulian Ah, I see that line at the top. Missed that the first read.
08:10 thessy no prob
08:10 JoeJulian Maybe I shouldn't be doing this at midnight. ;)
08:11 JoeJulian So that looks fine and healthy. Does "gluster volume status" look like everything that should be is running?
08:13 thessy I would say, yes. But http://fpaste.org/308087/21543951/
08:13 glusterbot Title: #308087 Fedora Project Pastebin (at fpaste.org)
08:14 JoeJulian No active tasks... Let's try giving it one, "gluster volume heal storage1 full"
08:15 thessy root@vsh01:~# gluster volume heal storage1 full
08:15 thessy Launching heal operation to perform full self heal on volume storage1 has been successful
08:16 JoeJulian Any data moving?
08:16 kovshenin joined #gluster
08:17 thessy mh, gluster volume status still says 'no active volume taskst' but
08:17 thessy there is some progress going on
08:19 crashmag joined #gluster
08:19 mhulsman joined #gluster
08:20 thessy is this command new (coming from glusterfs 3.4)?
08:22 thessy there I readded the brick, mounted the gluster client and saw all files, after that a "find /storage1/ | xargs stat" triggered the self heal
08:23 JoeJulian The "gluster volume heal" command has been in since 3.2 iirc.
08:23 jvandewege Hello All. I've a gluster-3.7.6 setup around oVirt-3.6, replica 2 and I've got a rather strange problem. I've setup a volume with two bricks and mounting that volume is no problem, can write data to it. Problem is 'gluster volume status' which works on one server but not the other.
08:24 jvandewege It hangs for 1m and then timesout with an error. Running the same command on the other server is OK, gluster volume info on both is no problem.
08:24 JoeJulian Check the log files in /var/log/glusterfs for clues. Unfortunately, I seem to be falling asleep at my keyboard so I'm heading off to bed.
08:24 thessy JoeJulian: thanks, will observe the self healing process
08:25 jvandewege JoeJulian: sleep well
08:25 jvandewege I'll collect some logs and use fpaste.
08:28 thessy nope, glustershd.log.1 stats: "[2016-01-07 08:28:21.001108] W [rpc-clnt-ping.c:145:rpc_clnt_ping_cbk] 0-storage1-client-0: socket or ib related error"
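
The replacement workflow thessy and JoeJulian walk through above reduces to a short command sequence; a sketch using the volume name from the paste (storage1) is:

    # run on either server: confirm peers and brick processes first
    gluster peer status
    gluster volume status storage1
    # kick off a full self-heal, then watch the pending-entry queue drain
    gluster volume heal storage1 full
    gluster volume heal storage1 info
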
08:29 fsimonce joined #gluster
08:30 ppai joined #gluster
08:31 DV joined #gluster
08:37 harish joined #gluster
08:47 d0nn1e joined #gluster
08:47 kshlm joined #gluster
08:50 vmallika joined #gluster
08:50 jiffin jvandewege: check the gluster peer status on the both nodes
08:51 atinm joined #gluster
08:51 jvandewege jiffin: gluster peer status is ok on both hosts
08:59 jiffin jvandewege: u may need to check the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on the failed node which points out the error
08:59 Slashman joined #gluster
09:01 haomaiwa_ joined #gluster
09:05 aravindavk joined #gluster
09:09 kshlm joined #gluster
09:09 haomaiwa_ joined #gluster
09:11 spalai joined #gluster
09:12 poornimag joined #gluster
09:14 spalai left #gluster
09:19 autostatic Good morning. I just upgraded two GlusterFS servers that both hold a single brick to 3.5.7 (on Ubuntu 14.04). One of the reasons I did this was because the brick logs were crammed with the following errors:
09:19 autostatic marker_removexattr_cbk] 0-datavolume-marker: No data available occurred while creating symlinks
09:19 autostatic But this still happens :(
09:20 autostatic Anybody any pointers to troubleshoot this? Other than these errors the servers seem to work fine,
09:29 mhulsman1 joined #gluster
09:32 ashiq_ joined #gluster
09:38 Manikandan autostatic, can you give me the output of gluster v info
09:40 Manikandan autostatic, also if you are using glusterfs 3.5.7 on *Ubuntu* 14.04, we have some issues with enabling quota itself, I am debugging it and you can expect a fix sooner in one of the next upcoming minor release of 3.5
09:41 Manikandan autostatic, here is the bug we are tracking - https://bugzilla.redhat.com/show_bug.cgi?id=1117888
09:41 glusterbot Bug 1117888: medium, medium, ---, mselvaga, ASSIGNED , Problem when enabling quota : Could not start quota auxiliary mount
09:44 haomaiwang joined #gluster
09:51 atalur joined #gluster
09:59 zhangjn joined #gluster
10:01 autostatic Manikandan: Thanks!
10:02 chirino joined #gluster
10:02 Manikandan autostatic, can you mail me your glusterfs logs(/var/log/glusterfs/)?
10:03 autostatic Sure, where can I send them to?
10:04 karnan joined #gluster
10:04 autostatic Output of gluster v info: http://pastebin.com/vGaP4QeA
10:04 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
10:05 autostatic Check :) http://paste.ubuntu.com/14428912/
10:05 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
10:06 autostatic ANd found your mail address, logs are coming your way
10:07 haomaiwa_ joined #gluster
10:14 Manikandan autostatic, thanks a lot :-)
10:17 harish_ joined #gluster
10:18 ashiq joined #gluster
10:22 thessy perhaps someone can help me understanding glusterfs, while JoeJulian is sleeping ;-)
10:22 thessy glustershd.log says "Server and Client lk-version numbers are not same, reopening the fds"
10:22 glusterbot thessy: This is normal behavior and can safely be ignored.
10:23 thessy glusterbot: thanks ;-)
10:25 karnan joined #gluster
10:26 jvandewege jiffin: found the problem :-) Somehow 1 server had the other 2 times as a peer. Deleting one of the GUIDs from /var/lib/glusterfs/peers/ did the trick.
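
For reference, the stale-peer cleanup jvandewege describes amounts to inspecting the per-UUID peer files (on most installs they live under /var/lib/glusterd/peers/) and removing the duplicate:

    # one file per peer UUID; a host listed twice shows up as two files
    ls /var/lib/glusterd/peers/
    grep hostname1 /var/lib/glusterd/peers/*
    # remove the stray entry, then restart glusterd and re-check
    systemctl restart glusterd
    gluster peer status
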
10:45 haomaiwa_ joined #gluster
10:53 jbrooks joined #gluster
10:54 autostatic Manikandan: mail has been sent
10:54 Manikandan autostatic, yes received it, thank you :)
11:00 natarej_ joined #gluster
11:03 kkeithley1 joined #gluster
11:09 haomaiwa_ joined #gluster
11:20 mhulsman joined #gluster
11:22 mhulsman1 joined #gluster
11:24 haomaiwa_ joined #gluster
11:27 Saravana_ joined #gluster
11:31 nangthang joined #gluster
11:34 liviudm_ joined #gluster
11:41 21WAAP3J4 joined #gluster
11:47 liviudm joined #gluster
11:50 poornimag joined #gluster
11:52 kotreshhr joined #gluster
11:52 mhulsman joined #gluster
11:54 haomaiwang joined #gluster
11:55 mobaer1 joined #gluster
11:56 hgowtham joined #gluster
11:58 zhangjn joined #gluster
11:59 zhangjn joined #gluster
11:59 zhangjn joined #gluster
12:01 zhangjn joined #gluster
12:03 rwheeler joined #gluster
12:05 Apeksha joined #gluster
12:06 haomaiwa_ joined #gluster
12:07 lanning joined #gluster
12:07 bluenemo joined #gluster
12:13 haomai___ joined #gluster
12:19 mobaer joined #gluster
12:21 kotreshhr left #gluster
12:24 Apeksha joined #gluster
12:38 haomaiwang joined #gluster
12:40 sankarshan_ joined #gluster
12:48 MessedUpHare joined #gluster
12:52 drankis joined #gluster
12:58 haomaiwa_ joined #gluster
13:01 haomaiwang joined #gluster
13:05 poornimag joined #gluster
13:06 zhangjn joined #gluster
13:09 onebree morning all
13:09 spalai joined #gluster
13:13 plarsen joined #gluster
13:15 haomaiwang joined #gluster
13:16 spalai left #gluster
13:19 ashka joined #gluster
13:20 Ethical2ak joined #gluster
13:21 yawkat joined #gluster
13:22 kdhananjay joined #gluster
13:23 mhulsman joined #gluster
13:29 Saravana_ joined #gluster
13:30 ira joined #gluster
13:32 R0ok_ joined #gluster
13:46 haomaiwa_ joined #gluster
13:51 ekzsolt joined #gluster
13:51 unclemarc joined #gluster
13:53 mhulsman joined #gluster
13:58 onebree Yesterday I tried to add --log-level=DEBUG to my mount statement. It said that such option did not exist.
13:59 edong23 joined #gluster
14:00 chirino joined #gluster
14:01 7YUAAKFB8 joined #gluster
14:01 EinstCrazy joined #gluster
14:02 shyam joined #gluster
14:06 unclemarc joined #gluster
14:12 MessedUpHare Hi everyone, i'm struggling to get GlusterFS + CTDB + Samba HA working on CentOS 7 using the CentOS SI packages
14:13 MessedUpHare I keep getting a smbd crash when a client connects from a windows box (SMB2_10 and/or NT1) if I run ctdb disable to simulate a node failure
14:13 MessedUpHare This works fine from a Linux NT1 client
14:15 ackjewt Hello. We're trying to set up a 2 node replicated cluster with 1 arbiter node. Is this supported? I found some info on a forum that stated that this is not yet supported, still there are lots of documentation about it online?
14:16 ndevos onebree: you can pass that as a mount option, like: mount -t glusterfs -o log-level=DEBUG ....
14:17 onebree ndevos: Let me try that.
14:19 ndevos MessedUpHare: what version of samba are you using? maybe there is an update for it?
14:21 ndevos ackjewt: it is a relatively new feature and there have been quite some improvements, I thought it was supposed to be stable with the latest releases
14:22 mhulsman1 joined #gluster
14:22 MessedUpHare samba-4.2.3-10.el7.x86_64
14:23 onebree It worked, and here is the log -- https://gist.github.com/onebree/90bff7b0a5962f4c70f3
14:23 glusterbot Title: gs.log · GitHub (at gist.github.com)
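
Spelled out fully, the mount option ndevos suggests looks like the following (server, volume, and mount point are placeholders):

    # FUSE mount with client-side debug logging
    mount -t glusterfs -o log-level=DEBUG server1:/myvol /mnt/myvol
    # the client log then appears as /var/log/glusterfs/mnt-myvol.log
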
14:23 MessedUpHare i've also tried with the rpms in the gluster-samba repo
14:24 thessy left #gluster
14:25 MessedUpHare which is samba-4.2.4-6.el7.centos.x86_64.rpm
14:25 ira joined #gluster
14:25 ndevos MessedUpHare: I'm no samba expert, maybe rjoseph, anoopcs, obnox or ira can point you in the right direction
14:27 MessedUpHare i'll have an ask in #samba too - see if anyone is there (i've already tried in #ctdb - but it douesn't appear anyone is around)
14:27 ackjewt ndevos: Ok, thanks. We're using 3.7.6 and when we mount using NFS, the mount shows up as 99GB (the space available on the arbiter node) instead of 33TB as we have available on the "real" nodes.
14:27 ndevos onebree: no idea what the logs mean... someone that knows more about dht might be able to help - maybe send an email to the gluster-users list to reach them?
14:28 jiffin joined #gluster
14:28 ira MessedUpHare: First thing, test the failover without smbd.
14:28 ira Just confirm the ips move correctly.
14:28 onebree ndevos: JoeJulian suggested that I post the debug log. Hopefully he appears later today to continue helping.
14:28 onebree All you guys here are awesome and helpful :-)
14:29 ndevos ackjewt: oh, that does not sounds to difficult to solve for the arbiter developers, can you file a bug for that?
14:29 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:29 MessedUpHare ira: I've confirmed that all fine
14:30 MessedUpHare ira: and it works fine from a Linux cifs (NT1) client
14:30 EinstCrazy joined #gluster
14:31 ackjewt ndevos: Yes, i'll try to do that!
14:31 ndevos ackjewt: thanks!
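
For context, the arbiter layout ackjewt asks about is created with the "replica 3 arbiter 1" form introduced in 3.7 (hosts and brick paths below are illustrative); the third brick stores only metadata, which is why its small size should not drive the reported volume size:

    gluster volume create myvol replica 3 arbiter 1 \
        node1:/bricks/myvol node2:/bricks/myvol arbiter1:/bricks/myvol
    gluster volume start myvol
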
14:32 Telsin joined #gluster
14:33 shaunm joined #gluster
14:37 kotreshhr joined #gluster
14:37 spalai joined #gluster
14:37 spalai left #gluster
14:39 amye joined #gluster
14:41 ira MessedUpHare: Interesting...
14:41 ira MessedUpHare: Does samba without CTDB, with vfs_glusterfs work?
14:42 MessedUpHare Yeah, it all works fine even with CTDB until I issue ctdb disable/ctdb moveip or shutdown the server to simulate a crash/failure/maintainence
14:43 dgandhi joined #gluster
14:44 nbalacha joined #gluster
14:44 RameshN joined #gluster
14:45 Humble joined #gluster
14:47 mhulsman joined #gluster
14:54 B21956 joined #gluster
14:55 nehar joined #gluster
14:58 Slashman joined #gluster
15:00 farhorizon joined #gluster
15:00 skylar joined #gluster
15:00 ira does normal ctdb operation work w/o smbd?
15:01 ira (I'm not sure I got that part.)
15:01 haomaiwa_ joined #gluster
15:04 edong23_ joined #gluster
15:06 MessedUpHare ira: yep
15:06 MessedUpHare ira: although, I haven't been able to test it with nfs yet
15:06 MessedUpHare or winbind
15:08 Javezim joined #gluster
15:14 rafi joined #gluster
15:16 kdhananjay joined #gluster
15:19 aravindavk joined #gluster
15:22 ira MessedUpHare: Ok... Now I'm scratching my head ;).
15:22 ira obnox: You seen something like this?
15:22 nerdcore left #gluster
15:33 ahino joined #gluster
15:40 obnox ira: oh, I need to read up.. but currently on phone
15:41 skylar joined #gluster
15:41 shubhendu joined #gluster
15:42 kotreshhr joined #gluster
15:42 skoduri joined #gluster
15:45 DonLin ira: Yes, I tested ctdb w/o smbd and NFS failover works well
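
The smbd-free failover check ira asks about can be exercised with ctdb's own tooling, for example:

    # on any node: node health and current public-IP placement
    ctdb status
    ctdb ip
    # take the local node out of service, confirm the IPs move, then re-enable it
    ctdb disable
    ctdb ip
    ctdb enable
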
15:45 kovsheni_ joined #gluster
15:47 deniszh joined #gluster
15:50 bluenemo hi guys. I had two nodes in replica mode and want to add a third. I'm trying via:  gluster volume add-brick gfs_fin_web replica 3 web1:/srv/gfs_fin_web/brick  but I'm getting: volume add-brick: failed: Pre Validation failed on web1. /srv/gfs_fin_web/brick is already part of a volume. gluster volume status shows only web0 and web10 (the ones connected before)
15:52 bluenemo gluster volume status shows all peers connected: http://paste.debian.net/hidden/f21cd7a6/
15:52 glusterbot Title: Debian Pastezone (at paste.debian.net)
15:53 kshlm joined #gluster
15:54 neofob joined #gluster
15:54 deniszh joined #gluster
15:56 bowhunter joined #gluster
15:59 calavera joined #gluster
15:59 bluenemo force did the trick
16:00 bluenemo more gluster nodes you must add
16:00 bluenemo M)
16:01 ndevos hehe
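
The "already part of a volume" pre-validation bluenemo hit trips on the volume-id markers left on a brick directory that was used before; besides "add-brick ... force", the usual alternative is to clear those markers first (paths match bluenemo's paste; only do this on a brick you really intend to reuse as empty):

    setfattr -x trusted.glusterfs.volume-id /srv/gfs_fin_web/brick
    setfattr -x trusted.gfid /srv/gfs_fin_web/brick
    rm -rf /srv/gfs_fin_web/brick/.glusterfs
    gluster volume add-brick gfs_fin_web replica 3 web1:/srv/gfs_fin_web/brick
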
16:01 haomaiwang joined #gluster
16:02 gem joined #gluster
16:08 bennyturns joined #gluster
16:12 papamoose joined #gluster
16:13 onebree ndevos: I may email gluster-dev. Is it a mailing list for people developing gluster itself? I am looking for help just using gluster
16:14 JoeJulian gluster-users is for users. -devel is for discussion of development.
16:14 ndevos onebree: gluster-users@gluster.org for users, gluster-devel@gluster.org for development stuff
16:14 ndevos onebree: you can subscribe by sending an email with subject "subscribe" to gluster-users-request@gluster.org
16:15 siel joined #gluster
16:16 onebree Okay.
16:16 onebree Hello JoeJulian!
16:17 rafi joined #gluster
16:19 JoeJulian Hah, I wonder why that's not critical. ENODEV on /dev/fuse seems pretty critical.
16:19 onebree Is that to me?
16:19 JoeJulian You, ndevos, the world in general.
16:20 JoeJulian https://gist.github.com/onebree/90bff7b0a5962f4c70f3#file-gs-log-L350
16:20 glusterbot Title: gs.log · GitHub (at gist.github.com)
16:20 sankarshan_ joined #gluster
16:21 JoeJulian That line says that opening /dev/fuse is what's returning ENODEV, which sems to be why it unmounts and fails.
16:21 ndevos yeah, no /dev/fuse will make mounting a little troublesome
16:22 ndevos onebree: is the fuse kernel module loaded? check with "lsmod | grep fuse"
16:22 onebree Where, local or remotely?
16:22 onebree Locally: fuse                   87741  3
16:23 JoeJulian mmkay, how about "file /dev/fuse"
16:23 JoeJulian This is on the client that's not mounting the volume.
16:24 onebree Wait, I thought the client is where I mount
16:24 onebree Like, the client is who has the main files, not the bricks
16:25 JoeJulian Sorry, let me rephrase: this is on the client that's failing to mount.
16:26 onebree So where I am running mount -t ?
16:26 onebree (I am fuzzy-brained today, please bear with me)
16:26 JoeJulian Yes. I need my espresso as well.
16:27 onebree I don't drink coffee, but for the last month or so I wish I did
16:28 csim it is never too late to come to the dark roasted side
16:29 hatchetj1ck joined #gluster
16:31 rwheeler joined #gluster
16:31 kdhananjay joined #gluster
16:32 onebree I remember in high school freshman who started drinking coffee at 14. I mean, that is too much. I am 21 but I would rather not become dependent on something that could make or break my day.
16:33 onebree I also hate the taste of coffee :P
16:33 spalai joined #gluster
16:33 mobaer joined #gluster
16:34 csim I tend to get lots of milk with it
16:35 onebree JoeJulian: Without quotes, it said character special. With quote, "file /dev/fuse" returned no such file or directory
16:36 onebree No matter the milk or cream, I always taste the coffee. Even in things with a shot of mocha or coffee-flavored.
16:37 JoeJulian Well that may explain why it's only a debug level log message.
16:38 TimRice joined #gluster
16:38 onebree How do get I /dev/fuse ?
16:38 onebree *how do I get
16:40 TimRice hey! is there a top limit on the ports assigned to a brick? we create a new snapshot of one of our volumes every night, and it seems that the ports being assigned now have gone out of the range of the port opening in our firewall...
16:41 TimRice just wondering what the actual range is that needs to be opened :) we have only 5 volumes + the snapshot that gets refreshed every night, yet the port being assigned is now default-offset + 200
16:41 JoeJulian onebree: according to those commands, it's there and is a device. Not yet sure how you could get that error in the logs. I'm reading source now trying to figure it out.
16:41 TimRice by default offset, i mean base-port
16:42 JoeJulian TimRice: 65535, I think. It'll just keep incrementing until it runs out of ports. I'm not sure what it does after that.
16:43 JoeJulian onebree: What version is that? The line numbers aren't matching up.
16:43 kkeithley_ After 65535 it starts using imaginary numbers
16:43 TimRice ok thanks JoeJulian :)
16:44 onebree JoeJulian: thank you very much. I will say this -- gluster WAS setup at one point on this machine, but months went by without touching it, and it was somehow upgraded. So I needed to uninstall and downgrade (which we went through last week).
16:44 kkeithley_ otherwise I hope we don't have a wrap-around bug
16:44 onebree Maybe the device it sees is the old mount I did in August?
16:45 TimRice is there anywhere else i could look/ask to find out what the behavior would be after 65535?
16:46 edong23 joined #gluster
16:46 JoeJulian No, onebree, /dev/fuse is the special device used to interface with the fuse module in the kernel.
16:47 TimRice i doubt we'll reach that number but.... would be nice to know
16:47 JoeJulian TimRice: https://github.com/gluster/glusterfs ;)
16:47 glusterbot Title: gluster/glusterfs · GitHub (at github.com)
16:47 TimRice thats what i was afraid of ;)
16:49 JoeJulian https://github.com/gluster/glusterfs/blob/master/xlators/mgmt/glusterd/src/glusterd-pmap.c#L179-L201
16:49 glusterbot Title: glusterfs/glusterd-pmap.c at master · gluster/glusterfs · GitHub (at github.com)
16:50 JoeJulian TimRice: In summary, after 65535 it fails.
16:50 TimRice thanks very much! :)
16:50 kanagaraj joined #gluster
16:51 TimRice and i guess theres no way to force a certain port on a brick/bricks in a volume?
16:52 JoeJulian I usually answer that with a qualified, "no". Because there is, but it's a manual process. Once you learn enough about how gluster works to know how to do it, you generally choose not to.
16:52 jwd joined #gluster
16:53 spalai left #gluster
16:54 TimRice yeah, its already sounding like a bad idea :) i guess theres nothing to do but let the port number increase and hope that it never becomes a problem :D thanks again!
16:55 JoeJulian afaict, restarting glusterd will restart the port mapping at the first available port after the base.
16:55 JoeJulian @ports
16:55 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
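
A firewall rule set matching glusterbot's port summary might look like the following (the width of the brick-port range is a local choice; size it to the bricks and snapshots you expect per node):

    firewall-cmd --permanent --add-port=24007-24008/tcp     # glusterd management (+rdma)
    firewall-cmd --permanent --add-port=49152-49251/tcp     # brick ports, 3.4 and later
    firewall-cmd --permanent --add-port=38465-38468/tcp     # gluster NFS + NLM
    firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp --add-port=2049/tcp
    firewall-cmd --reload
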
16:57 harish_ joined #gluster
16:57 mhulsman joined #gluster
17:01 karnan joined #gluster
17:01 14WAAMK72 joined #gluster
17:08 atinm joined #gluster
17:09 drankis joined #gluster
17:11 ashka hi, I'm having a disconnection issue with glusterfs. I have a volume with a single brick on a machine that is connected at 10gbps, and at some point everyday 130 clients start writing data there, and they get disconnected (I have connection reset by peer in the logs on the brick), which breaks the file transfers. I am using gluster 3.5.2
17:11 nbalacha joined #gluster
17:15 ashka the network between the brick and the clients is basically local (same datacenter), so I don't think that's what's causing it, so I am left wondering if this is an issue with gluster or with my setup
17:16 thessy joined #gluster
17:17 onebree JoeJulian: dev fuse is part of the kernel? Okay, at least that's one less thing I need to wrry about installing
17:20 thessy left #gluster
17:34 F2Knight joined #gluster
17:42 mhulsman joined #gluster
17:44 thessy joined #gluster
18:01 spalai joined #gluster
18:01 haomaiwa_ joined #gluster
18:12 ilbot3 joined #gluster
18:12 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
18:15 bennyturns joined #gluster
18:18 ivan_rossi left #gluster
18:19 bfoster joined #gluster
18:27 ira joined #gluster
18:35 ndk joined #gluster
18:36 mhulsman joined #gluster
18:37 Rapture joined #gluster
18:42 emitor joined #gluster
18:43 kotreshhr left #gluster
18:52 lpabon joined #gluster
18:53 spalai joined #gluster
18:54 coredump joined #gluster
18:56 haomaiwa_ joined #gluster
19:16 thessy left #gluster
19:18 emitor Hi all! I have 2 gluster servers in a replica 2 configuration. I've upgraded them from 3.4 to 3.7.6.1 and now I'm having some peer lock issues that get fixed if I restart one of the peers. Did you experience anything like this? Could you give me some hints to keep investigating the root cause? Here is a part of the logs http://ur1.ca/oe35e
19:18 glusterbot Title: #308310 Fedora Project Pastebin (at ur1.ca)
19:20 JoeJulian What symptom are you seeing?
19:22 emitor when I use gluster volume status I receive Another transaction is in progress. Please try again after sometime.
19:23 emitor or Locking failed on 172.16.0.2. Please check log file for details.
19:24 JoeJulian How frequently are you running volume status?
19:25 emitor I have a nagios checking that every 2 or 3 minutes
19:25 JoeJulian Oh, well it's not that then....
19:26 emitor this nagios was checking this before and after the upgrade I have this alarm all day long hehe
19:27 emitor I've tried to change the cluster.op-version to 3 but this still happens
19:27 emitor after the upgrade it stay configured in 2
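
For reference, cluster.op-version is raised cluster-wide with a single volume-set command; on a 3.7 cluster the valid values take the 30700/30704 form that comes up below, not single digits:

    # run once, on any peer, after every node runs the new release
    gluster volume set all cluster.op-version 30700
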
19:29 kovshenin joined #gluster
19:29 dlambrig_ joined #gluster
19:30 MACscr1 joined #gluster
19:33 malevolent_ joined #gluster
19:33 d-fence_ joined #gluster
19:33 JoeJulian Well, I'm trying it every 2 seconds and I can't make that happen.
19:33 ironhali1 joined #gluster
19:33 sloop- joined #gluster
19:34 bio___ joined #gluster
19:34 Pintomatic_ joined #gluster
19:34 JoeJulian I can't recall if 3 is a valid opver. I'm at 30704, myself.
19:35 janegil- joined #gluster
19:36 emitor I can try that op-ver
19:37 n-st_ joined #gluster
19:37 emitor I'm not 100% sure but this seems to happen after a self heal process finishes
19:37 emitor I'll confirm this in a minute when it finish a process that seems to be happening right now
19:38 JoeJulian I'd be surprised if it makes a difference changing the opver.
19:38 JoeJulian Do you have a lot of self-heals occurring?
19:38 Wojtek joined #gluster
19:38 ekzsolt joined #gluster
19:39 Ryllise joined #gluster
19:40 bennyturns joined #gluster
19:40 emitor right now there is one
19:40 bfoster joined #gluster
19:40 emitor we have some video transcoding happening on the storage
19:40 sadbox joined #gluster
19:41 JPaul joined #gluster
19:41 ackjewt joined #gluster
19:41 coreping_ joined #gluster
19:41 emitor I'm not sure why, after the transcoding finishes I get some self-healing process with these files
19:42 kbyrne joined #gluster
19:42 kkeithley1 joined #gluster
19:43 JoeJulian Is the transcoder not connecting to all the bricks? If there was a network issue where a client wasn't connecting to all the servers, it would still be able to write (unless you have quorum settings to prevent that) then anything it touches would be self-healed.
19:44 JoeJulian Otherwise, I'd guess it's a false positive, where the xattrs are set dirty while the transaction is still in process. I thought they'd found a way to prevent those false reports though.
19:50 emitor the transcoders connect throug NFS to gluster, I'm not sure of what quorum is but I've not setted anything but quota in some folders and the op-version now
19:51 JoeJulian @lucky glusterfs quorum
19:51 glusterbot JoeJulian: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Quorum.html
19:51 JoeJulian Meh, good enough to at least understand what it is.
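
For reference, quorum on a replica volume is switched on with volume options such as these (volume name taken from emitor's later paste; note that with replica 2, auto client quorum makes the first brick mandatory for writes):

    # client-side quorum: writes are refused when too few replicas are reachable
    gluster volume set fileserver cluster.quorum-type auto
    # server-side quorum: glusterd stops local bricks when too few peers are up
    gluster volume set fileserver cluster.server-quorum-type server
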
19:55 emitor good to know but I don't have that setted up, this is what I'm getting right now
19:55 emitor [root@storage01 ~]# gluster vol heal fileserver info
19:55 emitor Brick storage01.ing:/storage/bricks/fileserver
19:55 emitor Number of entries: 1
19:55 emitor Brick sotrage02.ing:/storage/bricks/fileserver
19:55 emitor Number of entries: 1
19:57 emitor I guess that it's better here http://fpaste.org/308335/96608145/
19:57 glusterbot Title: #308335 Fedora Project Pastebin (at fpaste.org)
19:58 JoeJulian Ah yes, that makes sense. Too bad they couldn't be more certain.
19:58 emitor I don't know how to check if there is really a "Possibly undergoing heal"
19:59 nathwill joined #gluster
20:00 JoeJulian Yeah, there's no good logical way. It's more like: you know the write is happening. Both servers are showing the same file as possibly healing. It's probably a false positive. If only one server was listing that file, I'd be sure it was being healed.
20:02 skylar joined #gluster
20:05 emitor anyway It finished right now and there isn't any problem yet, so I guess that my theory was wrong
20:10 DV joined #gluster
20:11 haomaiwa_ joined #gluster
20:12 mhulsman joined #gluster
20:17 emitor I've changed the op-version to 30700 before I wrote here, maybe that fixed the issue
20:18 emitor if that isn't the solution what should I look at?
20:20 JoeJulian If I was having the problem, I would compare the glusterd logs and match up the message lines. Whatever is causing the issue is probably at the very beginning of the string of errors.
20:21 JoeJulian If that doesn't show anything, I'd probably bump the logs to debug level and watch for the problem to start again and see if there's something obvious.
20:21 JoeJulian You can also dump the state of glusterd. I'm not sure if the state-dump command will dump glusterd, but if not SIGUSR1 will. /var/run/glusterd has to exist.
20:26 mhulsman1 joined #gluster
20:31 emitor Ok! I'm going to try that, I'm not sure about the state-dump thing but I'm looking into it
20:32 emitor Thanks a lot for your support @JoeJulian!
20:33 JoeJulian You're welcome
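
A sketch of the statedump step JoeJulian describes (SIGUSR1 to the running glusterd; the dump directory varies by build, commonly /var/run/gluster):

    kill -USR1 $(pidof glusterd)
    # look for glusterdump.<pid>.dump.<timestamp> files
    ls -lt /var/run/gluster/
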
20:45 misc joined #gluster
21:14 haomaiwa_ joined #gluster
21:21 B21956 joined #gluster
21:39 F2Knight joined #gluster
21:51 B21956 joined #gluster
21:51 d-fence_ joined #gluster
21:51 owlbot joined #gluster
21:51 k-ma joined #gluster
21:51 ccha5 joined #gluster
21:51 Larsen_ joined #gluster
21:53 onebree joined #gluster
21:53 marlinc joined #gluster
21:53 onebree Hello. I do not think my messages went through
21:54 onebree JoeJulian:  how do I resolve the issue of /dev/fuse being mounted already, or something? Or should I just email gluster users?
21:56 onebree I read online that the Debian package fuse-utils helped. The only package name that looks similar for CentOS is fuse-devel. Do I need this installed?
21:57 mowntan joined #gluster
22:04 edong23 joined #gluster
22:07 ira joined #gluster
22:07 DV__ joined #gluster
22:10 JoeJulian nope
22:10 JoeJulian Don't get too stuck on that. Gluster debug logs are often full of red herrings.
22:16 haomaiwang joined #gluster
22:18 JoeJulian onebree: I started trying to find the answer, but I never did find an answer as to what version you're running.
22:19 ahino joined #gluster
22:24 squizzi_ joined #gluster
22:27 JoeJulian onebree: Ok, not a red herring. You're running 3.5.3 btw. 2 thoughts: Try again and see if there's anything in dmesg about it. Are you running your client in a container, openvz, lxc, etc?
22:32 skylar joined #gluster
22:36 emitor joined #gluster
22:38 amye joined #gluster
22:50 ahino joined #gluster
23:11 neofob left #gluster
23:15 d0nn1e joined #gluster
23:29 ahino joined #gluster
23:31 frakt joined #gluster
23:33 tsaavik left #gluster
