
IRC log for #gluster, 2017-02-23


All times shown according to UTC.

Time Nick Message
00:36 Teraii cyberbootje1, i have tested it and found a real issue
00:36 Teraii ls'ing the file while uploading
00:36 Teraii breaks the file
00:37 cyberbootje1 oh that's not good
00:37 Teraii even after the upload finishes
00:37 Teraii yes
00:37 Teraii you can test
00:37 Teraii create a mirror brick
00:37 Teraii mount nfs
00:37 Teraii upload a large file (like 600MB)
00:38 Teraii while uploading, run ls on the directory
00:38 Teraii when the upload has finished, check the sha1 of the file :)
00:38 Teraii (the file is truncated)
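
A rough reproduction sketch of the steps above, assuming two hosts named node1/node2, a volume called testvol, and a large test file (all names hypothetical; on 3.9 the built-in NFS server may also need to be enabled with nfs.disable off):

    # create and start a two-brick replica ("mirror") volume
    gluster volume create testvol replica 2 node1:/bricks/b1 node2:/bricks/b1
    gluster volume start testvol
    # on a client, mount it over NFSv3
    mount -t nfs -o vers=3 node1:/testvol /mnt/testvol
    # copy a large file in and run ls against the directory mid-transfer
    cp /path/to/bigfile /mnt/testvol/ &
    while kill -0 $! 2>/dev/null; do ls -l /mnt/testvol/ > /dev/null; sleep 1; done
    # once the copy finishes, compare checksums; per the report the remote copy is truncated
    sha1 /path/to/bigfile /mnt/testvol/bigfile
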
00:38 cyberbootje1 that's in the latest version?
00:39 cyberbootje1 no fix?
00:39 Teraii and worse, the second server synchronises the bad version
00:39 Teraii yes, the version is the current port's version
00:40 Teraii glusterfs-3.9.0_1
00:41 Teraii (i'm on freebsd11)
00:42 cyberbootje1 ok, from what i see in pkg it's version 3.7.6_1
00:45 bglick joined #gluster
00:47 vbellur Teraii: could you file a bug please?
00:47 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
00:49 Teraii yes, the page has been open for 10 days
00:50 Teraii i'm de-stacking urgent urgent urgent tasks these weeks :)
00:50 vbellur joined #gluster
00:51 vbellur joined #gluster
00:51 bglick We've been slowly upgrading from glusterfs37 to glusterfs39 on a distributed dispersed set of 5x(2+1).  Today we had 2 of one set upgraded to 39 and started having issues with glusterfs dying on one of the servers in a set, after getting 'too many open files' errors.  We decided to go ahead and upgrade the 3rd server in the set and still have the same issues. Any advice?
00:51 vbellur joined #gluster
00:52 vbellur joined #gluster
00:53 vbellur joined #gluster
00:53 mallorn joined #gluster
00:54 vbellur joined #gluster
00:54 farhorizon joined #gluster
00:57 mallorn bglick, glusterd lasted about 23 seconds that time before crashing.
01:03 cyberbootje1 Teraii, ls on the gluster server?
01:04 Teraii on the mounted dir
01:04 Teraii where you are uploading the file
01:04 cyberbootje1 ugh that's awful
01:04 Teraii probably a stats issue
01:04 Teraii linux stats and freebsd stats are very different
01:05 Teraii (as said on this chan some days ago)
01:05 farhorizon joined #gluster
01:08 JoeJulian bglick: last time I saw that issue was 2007. :(
01:09 Teraii JoeJulian, on linux ?
01:09 JoeJulian Yeah, but that was a bug that's long since been fixed.
01:09 JoeJulian I haven't seen that with any version since.
01:10 Teraii the stats method must be rewritten i think
01:10 JoeJulian left #gluster
01:10 JoeJulian joined #gluster
01:10 JoeJulian oops
01:10 Teraii :)
01:11 mallorn I'm working with bglick, so I'm going to jump in with some info...
01:11 JoeJulian Teraii: I'm referring to the "too many open files" error.
01:11 Teraii ha ok :)
01:11 shdeng joined #gluster
01:11 mallorn We were running 3.7 and were getting continuous errors like this every once in a while:
01:11 mallorn 0-nova-server: 212126: OPEN <gfid:8ff47331-686f-4bb4-a5e2-7563a0d6804d> (8ff47331-686f-4bb4-a5e2-7563a0d6804d) ==> (Too many open files) [Too many open files]
01:13 shutupsquare joined #gluster
01:13 Teraii creating a benchmark could be a good idea
01:13 Teraii :)
01:13 mallorn It happened every couple of weeks, but today it hit hard.  Suddenly glusterd was consuming every available filehandle within seconds and glusterd would crash with this:
01:13 mallorn [posix-helpers.c:1836:posix_health_check_thread_proc] 0-nova-posix: still alive! -> SIGTERM
01:13 mallorn [glusterfsd.c:1328:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f01fb495dc5] -->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xe5) [0x7f01fcb2acf5] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x7f01fcb2ab6b] ) 0-: received signum (15), shutting down
01:13 Teraii releasing only if the binary passed the bench :)
01:13 glusterbot mallorn: ('s karma is now -175
01:14 mallorn We upgraded that set to 3.9, but the problem still continues.
01:14 mallorn Sorry if I wasn't supposed to paste that here. I can paste to pastebin or something instead.
01:15 mallorn We increased the number of file handles to over 1,000,000 but they still get consumed within about 20-30 seconds, and then glusterd crashes again.
01:18 gyadav joined #gluster
01:18 JoeJulian wow... strace, maybe, to see what files are getting opened?
01:19 JoeJulian (and pasting one or two lines isn't a big deal, if you need more consider ,,(paste) )
01:19 glusterbot For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
01:20 JoeJulian Oooh, mallorn do you have a lot of clients?
01:20 JoeJulian and a replicated volume?
01:21 mallorn It's not replicated at all, but is a distributed-disperse volume (5 x (2 + 1)) with about 30 clients.
01:21 kramdoss_ joined #gluster
01:22 JoeJulian Well that doesn't seem like it should be it.
01:26 mallorn What's weird is we'll see something like 281,000 file handles open to the same file.  There are only about 1000 files in our volume (although they can be large).
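
When one file is soaking up descriptors like that, a generic way to confirm it from the server side (the PID discovery and paths here are placeholders, not taken from this cluster) is:

    # pick the brick (glusterfsd) or management (glusterd) process that is leaking
    PID=$(pgrep -f glusterfsd | head -1)
    # count open descriptors per target path
    ls -l /proc/$PID/fd | awk '{print $NF}' | sort | uniq -c | sort -rn | head
    # or watch opens happen live, as JoeJulian suggests
    strace -f -e trace=open,openat -p $PID
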
01:31 BitByteNybble110 joined #gluster
01:32 BitByteNybble110 Trying to do a clean install of Gluster 3.9 on CentOS 7.  yum install glusterfs-server glusterfs-ganesha -y throws Error: Package: glusterfs-ganesha-3.9.0-2.el7.x86_64 (centos-gluster39) Requires: /usr/lib/ocf/resource.d/portblock
01:39 BitByteNybble110 resource-agents-3.9.5-82.el7 is installed but /usr/lib/ocf/resource.d/portblock is missing
01:39 mallorn This looks relevant, BitByteNybble110:  https://bugzilla.redhat.com/show_bug.cgi?id=1389293
01:39 glusterbot Bug 1389293: unspecified, unspecified, ---, kkeithle, CLOSED CANTFIX, build: incorrect Requires: for portblock resource agent
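
In other words, nothing provides the literal path the package Requires. A couple of hedged checks that make this visible (output will vary by repo; the agent normally ships one directory deeper, under resource.d/heartbeat/):

    # the exact dependency glusterfs-ganesha asks for
    yum provides /usr/lib/ocf/resource.d/portblock
    # what the installed resource-agents package actually ships
    rpm -ql resource-agents | grep -i portblock
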
01:40 pjrebollo joined #gluster
01:40 bglick Here’s a snippet of some brick log errors in trace mode: https://paste.fedoraproject.org/paste/EV9a1LU~sdJg3ii~Yvo7D15M1UNdIGYhyRLivL9gydE=
01:40 glusterbot Title: GlusterFS Bricks Trace Errors - Modern Paste (at paste.fedoraproject.org)
02:23 d0nn1e joined #gluster
02:38 mallorn We've failed one brick, so that disperse set is now running with only two remaining bricks.  One is OK, but the other always errors out.
02:46 plarsen joined #gluster
02:53 cholcombe joined #gluster
02:56 bglick Are there any good tools to reconstruct glusterfs data?  Or rebuild a dead brick from its local files?
02:59 derjohn_mob joined #gluster
03:02 Gambit15 joined #gluster
03:11 mallorn I have the same question as bglick.
03:11 melliott joined #gluster
03:28 magrawal joined #gluster
03:34 kramdoss_ joined #gluster
03:46 atinm joined #gluster
03:52 prasanth joined #gluster
03:55 nbalacha joined #gluster
04:00 gyadav joined #gluster
04:13 aravindavk joined #gluster
04:16 itisravi joined #gluster
04:17 buvanesh_kumar joined #gluster
04:18 bglick joined #gluster
04:19 vbellur joined #gluster
04:27 dominicpg joined #gluster
04:38 ankitr joined #gluster
04:44 rafi joined #gluster
04:46 rafi1 joined #gluster
04:52 susant joined #gluster
04:55 Prasad joined #gluster
04:57 dominicpg joined #gluster
04:58 karthik_us joined #gluster
04:59 dgandhi joined #gluster
04:59 jiffin joined #gluster
05:01 Shu6h3ndu joined #gluster
05:03 kotreshhr joined #gluster
05:07 mat__ joined #gluster
05:08 BitByteNybble110 joined #gluster
05:12 Shu6h3ndu joined #gluster
05:13 ndarshan joined #gluster
05:14 rafi1 joined #gluster
05:24 gyadav joined #gluster
05:31 RameshN joined #gluster
05:32 apandey joined #gluster
05:34 ppai joined #gluster
05:35 skoduri joined #gluster
05:36 susant joined #gluster
05:43 karthik_us joined #gluster
05:43 sanoj joined #gluster
05:45 skumar joined #gluster
05:46 riyas joined #gluster
05:46 itisravi joined #gluster
05:46 ankitr_ joined #gluster
05:46 Saravanakmr joined #gluster
05:48 rjoseph joined #gluster
05:51 rafi1 joined #gluster
05:51 msvbhat joined #gluster
05:53 buvanesh_kumar joined #gluster
05:56 poornima joined #gluster
06:05 sbulage joined #gluster
06:07 jkroon joined #gluster
06:15 suliba joined #gluster
06:19 karthik_us|afk joined #gluster
06:22 hgowtham joined #gluster
06:25 kdhananjay joined #gluster
06:26 Karan joined #gluster
06:26 msvbhat joined #gluster
06:30 mb_ joined #gluster
06:33 Philambdo joined #gluster
06:35 mallorn We figured it out.  Two clients were running slightly older versions of the glusterfs fuse client and they were making the servers go crazy.  Upgrading to a newer fuse/glusterfs and rebooting those two bad nodes allowed the server to continue running without running out of file handles.
06:37 mallorn We firewalled the bad hosts for testing; if the firewall was removed for more than 30 seconds the server would allocate over 1,000,000 file handles and then die.
06:40 k4n0 joined #gluster
06:45 sona joined #gluster
06:58 rafi joined #gluster
07:11 rastar joined #gluster
07:15 xavih mallorn: this could be caused by this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1344396
07:15 glusterbot Bug 1344396: medium, unspecified, ---, xhernandez, CLOSED CURRENTRELEASE, fd leak in disperse
07:16 xavih mallorn: it should be fixed from 3.7.12. Were your bad clients running a previous version ?
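
For anyone hitting the same thing, one way to track down which clients are on an older release (the volume name is a placeholder; the status output only lists connections, so the version still has to be checked on each client host):

    # list the clients connected to each brick
    gluster volume status <volname> clients
    # then, on each suspect client
    glusterfs --version
    rpm -q glusterfs-fuse   # on RPM-based systems
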
07:23 mhulsman joined #gluster
07:27 skumar_ joined #gluster
07:28 rafi1 joined #gluster
07:31 mbukatov joined #gluster
07:39 k4n0 joined #gluster
07:40 thessy joined #gluster
07:43 mhulsman1 joined #gluster
07:43 ashiq joined #gluster
07:45 ivan_rossi joined #gluster
07:53 mhulsman joined #gluster
08:00 thessy need a little help with self-healing in a 2-node replicate setup with fuse clients. upgraded from 3.7.x to 3.8.9 but self-healing seems not to work. "find . | xargs stat" does not work anymore. are there any changes from 3.7 to 3.8?
08:06 mhulsman joined #gluster
08:08 arpu joined #gluster
08:10 thessy "gluster volume get storage2 all | grep heal" looks god, "heal" options are "on"
08:11 thessy but glustershd.log says " [afr-self-heald.c:479:afr_shd_index_sweep] 0-storage2-replicate-0: unable to get index-dir on storage2-client-2"
08:11 thessy any hints to solve this?
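
The usual first checks for a stalled self-heal, using the storage2 volume name from the messages above (everything else is generic):

    # are all bricks and the self-heal daemon actually online?
    gluster volume status storage2
    # what is pending heal?
    gluster volume heal storage2 info
    # restart bricks/self-heal daemon without touching data, then force a full sweep
    gluster volume start storage2 force
    gluster volume heal storage2 full
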
08:16 shutupsquare joined #gluster
08:16 fubada joined #gluster
08:17 Humble joined #gluster
08:17 moneylotion joined #gluster
08:18 mhulsman joined #gluster
08:20 k4n0 joined #gluster
08:27 fsimonce joined #gluster
08:28 dspisla joined #gluster
08:29 dspisla Hello guys ! Is there a way to have one global namespace with multiple gluster volumes?
08:29 cloph_away joined #gluster
08:36 sona joined #gluster
08:39 [diablo] joined #gluster
08:45 the-me joined #gluster
08:49 karthik_us joined #gluster
08:49 skumar joined #gluster
08:54 mhulsman1 joined #gluster
09:07 mhulsman joined #gluster
09:16 Philambdo joined #gluster
09:16 flying joined #gluster
09:20 jiffin1 joined #gluster
09:24 kdhananjay joined #gluster
09:29 dominicpg joined #gluster
09:31 Seth_Karlo joined #gluster
09:35 k4n0_ joined #gluster
09:37 Seth_Karlo joined #gluster
09:37 derjohn_mob joined #gluster
09:52 jiffin joined #gluster
09:54 mhulsman1 joined #gluster
09:56 k4n0 joined #gluster
09:59 Prasad_ joined #gluster
09:59 ppai joined #gluster
10:07 mhulsman joined #gluster
10:17 shutupsquare joined #gluster
10:31 msvbhat joined #gluster
10:37 Prasad__ joined #gluster
10:44 masuberu joined #gluster
10:53 social joined #gluster
10:59 ShwethaHP joined #gluster
11:02 buvanesh_kumar joined #gluster
11:06 susant joined #gluster
11:08 xp joined #gluster
11:09 ppai joined #gluster
11:10 ashiq joined #gluster
11:18 gyadav joined #gluster
11:37 poornima_ joined #gluster
11:38 pjrebollo joined #gluster
11:44 flyingX joined #gluster
11:44 thessy left #gluster
11:47 thessy joined #gluster
11:48 [fre] joined #gluster
11:49 [fre] hi devs... I'm having this request about creating shares and dirs. Could you advise about the amount and performance-impact?
11:50 jiffin joined #gluster
11:50 [fre] - /srv/project/var/log
11:50 [fre] - /srv/project/var/tmp
11:50 [fre] - /srv/project/project/www/
11:50 [fre] - /srv/project/project/var/
11:51 [fre] both vars would require around 10G and the projects would be around 100GB
11:52 [fre] how would you create replicated shares ? 1 , 2 or 4?
11:55 msvbhat joined #gluster
11:55 [fre] is there a good reason to justify putting all of those on different shares?
11:55 BatS9_ joined #gluster
12:01 RameshN joined #gluster
12:11 msvbhat joined #gluster
12:28 nthomas joined #gluster
12:31 poornima joined #gluster
12:35 social joined #gluster
12:45 k4n0 joined #gluster
12:47 side_control joined #gluster
12:49 _nixpanic joined #gluster
12:49 _nixpanic joined #gluster
12:49 poornima joined #gluster
13:04 Seth_Karlo joined #gluster
13:04 jkroon joined #gluster
13:05 Seth_Karlo joined #gluster
13:17 kotreshhr left #gluster
13:27 mhulsman1 joined #gluster
13:30 mhulsman joined #gluster
13:32 unclemarc joined #gluster
13:38 baber joined #gluster
13:40 msvbhat joined #gluster
14:03 ankitr_ joined #gluster
14:19 plarsen joined #gluster
14:19 cholcombe joined #gluster
14:20 skylar joined #gluster
14:21 ankitr joined #gluster
14:21 Can joined #gluster
14:25 Can Hi all, looking for a quick way to break gluster replication between two hosts. any advice?
14:26 aravindavk joined #gluster
14:28 thessy left #gluster
14:29 Gambit15 Kill glusterd on one of the hosts...
14:30 susant joined #gluster
14:34 squizzi joined #gluster
14:38 Can Thanks Gambit15. But files are still synchronizing
14:38 Can even after killing glusterd on both servers
14:43 nbalacha joined #gluster
14:47 BatS9_ Purge gluster on one of them and change IP :)
14:47 Can sorry, I had stopped the service rather than killing it. after killing it, it seems it's no longer syncing :)
14:48 Can Changing the IP isn't good, especially for customer documents
14:49 Can but thx for the advice :)
14:52 ira joined #gluster
14:57 moneylotion joined #gluster
14:59 ankitr joined #gluster
15:02 susant left #gluster
15:02 sbulage joined #gluster
15:03 sona joined #gluster
15:04 kramdoss_ joined #gluster
15:36 sona joined #gluster
15:38 mhulsman1 joined #gluster
15:39 mhulsman joined #gluster
15:46 [diablo] joined #gluster
15:48 kpease joined #gluster
16:04 ShwethaHP joined #gluster
16:09 moneylotion joined #gluster
16:11 farhorizon joined #gluster
16:21 hybrid512 joined #gluster
16:22 moneylotion joined #gluster
16:23 Gambit15 joined #gluster
16:32 moneylotion howdy gluster peeps, my volumes aren't mounting at boot (fstab)... i can fusermount -uz <volume> and mount -a, and the volume will mount, but not at boot
16:32 moneylotion any ideas?
16:37 msvbhat joined #gluster
16:40 riyas joined #gluster
16:41 Humble joined #gluster
16:45 cloph get more patience..
16:57 d0nn1e joined #gluster
16:58 kramdoss_ joined #gluster
17:00 moneylotion joined #gluster
17:00 bglick joined #gluster
17:02 sona joined #gluster
17:04 moneylotion joined #gluster
17:10 jdossey joined #gluster
17:17 moneylotion joined #gluster
17:19 squizzi joined #gluster
17:24 Seth_Karlo joined #gluster
17:26 JoeJulian moneylotion: Check your client logs to see why they're not succeeding. /var/log/glusterfs/${mountpath with / replaced with _}
17:26 JoeJulian .log
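
For reference, a typical fstab line for a gluster fuse mount that must wait for the network at boot looks something like this (hostnames, volume and mount point are placeholders; which options actually help depends on the distro and init system):

    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0
    # on systemd hosts, deferring the mount until first access can also work around boot ordering:
    # server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,noauto,x-systemd.automount  0 0
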
17:36 yosafbridge joined #gluster
17:40 msvbhat joined #gluster
17:40 ivan_rossi left #gluster
17:50 squizzi joined #gluster
17:53 oajs joined #gluster
17:54 aronnax joined #gluster
17:57 rastar joined #gluster
18:02 jkroon joined #gluster
18:02 farhorizon joined #gluster
18:15 cholcombe joined #gluster
18:18 ShwethaHP left #gluster
18:26 farhorizon joined #gluster
18:29 kpease joined #gluster
18:32 ahino joined #gluster
18:32 benergy joined #gluster
18:32 benergy left #gluster
18:38 d0nn1e joined #gluster
18:45 kpease joined #gluster
18:56 riyas joined #gluster
19:12 Jacob843 joined #gluster
19:30 DV joined #gluster
19:31 rwheeler joined #gluster
19:42 cholcombe joined #gluster
19:44 ira joined #gluster
19:53 tom[] joined #gluster
20:04 ira joined #gluster
20:10 ij joined #gluster
20:12 ij Gluster is decentralized? Like unplug the wire for a day, and when it connects back, it'll sync?
20:13 ij … with the other node
20:17 jwaibel joined #gluster
20:19 jdossey joined #gluster
20:21 ahino joined #gluster
20:25 pioto joined #gluster
20:33 plarsen joined #gluster
20:33 Gambit15 ij, if you configure quorum & self-healing correctly, yes
20:33 Gambit15 But if you configure it incorrectly, you could very easily end up destroying your data
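
The quorum knobs being referred to are volume options along these lines (the volume name is a placeholder; sensible values depend on the replica count, and with only two nodes quorum typically means losing the wrong node makes the volume read-only or unavailable, which is why a third arbiter node is commonly recommended):

    # client-side quorum: allow writes only while a majority of the replica set is reachable
    gluster volume set <volname> cluster.quorum-type auto
    # server-side quorum: stop bricks when too many peers are lost
    gluster volume set <volname> cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51
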
20:34 ij So it can function like a dropbox?
20:34 ij If the big IF: you've configured it properly — is true.
20:35 Gambit15 Um...dropbox is an interface for users
20:35 Gambit15 http://gluster.readthedocs.io/en/latest/
20:35 glusterbot Title: Gluster Docs (at gluster.readthedocs.io)
20:35 Gambit15 Read it!
20:35 ij I will!
20:37 ij Perhaps that's a bad question. How would conflicts be resolved?
20:43 JoeJulian ij: GlusterFS is a clustered filesystem, not a synchronization tool. I get the impression you're looking for the latter.
20:50 derjohn_mob joined #gluster
20:54 vbellur joined #gluster
20:57 vbellur1 joined #gluster
20:58 vbellur joined #gluster
21:00 vbellur joined #gluster
21:01 vbellur1 joined #gluster
21:02 vbellur joined #gluster
21:03 vbellur1 joined #gluster
21:03 wushudoin| joined #gluster
21:04 vbellur joined #gluster
21:04 jbautista- joined #gluster
21:05 vbellur joined #gluster
21:05 vbellur1 joined #gluster
21:06 vbellur joined #gluster
21:09 anoopcs joined #gluster
21:17 ira joined #gluster
21:24 jbrooks joined #gluster
21:51 vbellur joined #gluster
21:52 xrated joined #gluster
21:57 ic0n joined #gluster
21:58 Acinonyx joined #gluster
22:08 ic0n joined #gluster
22:20 jkroon joined #gluster
22:20 pjrebollo joined #gluster
22:44 baber joined #gluster
22:46 cholcombe joined #gluster
23:11 masber joined #gluster
23:18 MidlandTroy joined #gluster
23:20 major joined #gluster
23:39 wushudoin joined #gluster
23:55 cacasmacas joined #gluster
