
IRC log for #gluster, 2016-08-22


All times shown according to UTC.

Time Nick Message
00:12 shyam joined #gluster
00:18 pdrakeweb joined #gluster
00:56 pdrakeweb joined #gluster
01:19 ahino joined #gluster
01:25 shdeng joined #gluster
01:39 ZachLanich joined #gluster
01:46 nishanth joined #gluster
01:56 harish joined #gluster
02:35 telius joined #gluster
02:39 kramdoss_ joined #gluster
02:52 Lee1092 joined #gluster
03:02 Gambit15 joined #gluster
03:05 Gnomethrower joined #gluster
03:21 kukulogy joined #gluster
03:23 kshlm joined #gluster
03:26 kukulogy hello, I'm trying to do some experiments with my peers. I shut down gluster03. https://dpaste.de/YPav
03:26 glusterbot Title: dpaste.de: Snippet #377294 (at dpaste.de)
03:27 kukulogy I tried to run replace-brick (https://dpaste.de/YmCz), but it fails.
03:27 glusterbot Title: dpaste.de: Snippet #377295 (at dpaste.de)
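
For anyone following along: in 3.7-era releases the only supported replace-brick form is the single-step "commit force" variant. A minimal sketch, with hypothetical volume and brick names rather than kukulogy's actual layout:

    gluster volume replace-brick myvol gluster03:/data/brick1 gluster04:/data/brick1 commit force
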
03:35 ZachLanich joined #gluster
03:42 sanoj joined #gluster
03:50 raghug joined #gluster
03:51 kpease joined #gluster
04:00 atinm joined #gluster
04:02 David_Varghese joined #gluster
04:08 itisravi joined #gluster
04:13 kukulogy nvm guys sorry for the trouble. Just found it in the docs.
04:28 itisravi joined #gluster
04:29 shubhendu joined #gluster
04:42 karthik_ joined #gluster
04:53 ashiq joined #gluster
05:02 kdhananjay joined #gluster
05:04 Alghost joined #gluster
05:13 ndarshan joined #gluster
05:13 RameshN joined #gluster
05:15 Manikandan joined #gluster
05:23 derjohn_mob joined #gluster
05:31 aspandey joined #gluster
05:33 rafi joined #gluster
05:37 nishanth joined #gluster
05:37 ramky joined #gluster
05:37 cliluw joined #gluster
05:42 kotreshhr joined #gluster
05:49 arcolife joined #gluster
05:49 mhulsman joined #gluster
05:55 nthomas joined #gluster
05:57 mhulsman joined #gluster
05:59 Saravanakmr joined #gluster
06:01 Bhaskarakiran joined #gluster
06:10 Saravanakmr kshlm, ping
06:10 glusterbot Saravanakmr: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
06:11 kshlm Saravanakmr, Yeah?
06:11 Saravanakmr kshlm, can you have a look at this pull request? https://github.com/gluster/glusterdocs/pull/145#discussion_r75449455
06:11 glusterbot Title: Create op_version.md by SaravanaStorageNetwork · Pull Request #145 · gluster/glusterdocs · GitHub (at github.com)
06:12 hgowtham joined #gluster
06:13 msvbhat joined #gluster
06:18 jtux joined #gluster
06:24 karnan joined #gluster
06:27 atalur joined #gluster
06:28 anil joined #gluster
06:30 itisravi joined #gluster
06:30 Alghost joined #gluster
06:31 kaushal_ joined #gluster
06:33 kdhananjay joined #gluster
06:34 ankitraj joined #gluster
06:37 ZachLanich joined #gluster
06:39 rastar joined #gluster
06:44 rafi joined #gluster
06:46 devyani7 joined #gluster
06:46 d0nn1e joined #gluster
06:49 Saravanakmr kshlm++
06:49 glusterbot Saravanakmr: kshlm's karma is now 3
06:50 RameshN joined #gluster
06:54 spalai joined #gluster
06:56 poornima_ joined #gluster
07:01 kukulogy I'm following this doc: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-brick I'm stuck at the setfattr step since it doesn't exist in FreeBSD.
07:01 glusterbot Title: Managing Volumes - Gluster Docs (at gluster.readthedocs.io)
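
The guide's setfattr step just creates and removes a dummy extended attribute on the mount point to trigger a lookup/heal. A possible FreeBSD substitute is setextattr(8)/rmextattr(8); the namespace choice and the mount path below are assumptions, not something verified here:

    # Linux step from the guide looks roughly like:
    #   setfattr -n trusted.non-existent-key -v abc /mnt/r2
    #   setfattr -x trusted.non-existent-key /mnt/r2
    # Hypothetical FreeBSD equivalent:
    setextattr user non-existent-key abc /mnt/r2
    rmextattr user non-existent-key /mnt/r2
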
07:02 kxseven joined #gluster
07:04 [diablo] joined #gluster
07:05 arcolife joined #gluster
07:07 satya4ever joined #gluster
07:10 Muthu_ joined #gluster
07:10 Sebbo1 joined #gluster
07:15 pur joined #gluster
07:16 kramdoss_ joined #gluster
07:21 derjohn_mob joined #gluster
07:25 mhulsman1 joined #gluster
07:27 jkroon joined #gluster
07:27 sanoj joined #gluster
07:27 jri joined #gluster
07:36 kdhananjay joined #gluster
07:36 itisravi_ joined #gluster
07:41 siavash joined #gluster
07:52 Pupeno joined #gluster
07:52 Pupeno joined #gluster
07:53 fsimonce joined #gluster
07:54 om joined #gluster
07:54 devyani7 joined #gluster
07:55 Alghost joined #gluster
07:59 mbukatov joined #gluster
08:01 hchiramm joined #gluster
08:02 deniszh joined #gluster
08:17 arcolife joined #gluster
08:17 archit_ joined #gluster
08:26 RameshN joined #gluster
08:26 mhulsman joined #gluster
08:29 Slashman joined #gluster
08:32 itisravi joined #gluster
08:34 aspandey joined #gluster
08:34 kdhananjay joined #gluster
08:37 ivan_rossi joined #gluster
08:37 Alghost joined #gluster
08:38 Alghost_ joined #gluster
08:44 Wojtek joined #gluster
08:45 tg2 joined #gluster
08:47 atalur joined #gluster
08:51 derjohn_mob joined #gluster
08:54 ivan_rossi left #gluster
08:58 kdhananjay joined #gluster
08:58 itisravi joined #gluster
08:58 auzty joined #gluster
09:05 ahino joined #gluster
09:08 atinm joined #gluster
09:19 karthik_ joined #gluster
09:21 atalur joined #gluster
09:26 mhulsman1 joined #gluster
09:45 Pupeno joined #gluster
09:47 hackman joined #gluster
09:57 satya4ever joined #gluster
09:57 mhulsman joined #gluster
10:01 tjikkun JoeJulian: regarding the restarted heal: I could see in the state dumps that the heal's start= offset kept increasing until it reached the end of the file, and then it started to look random. So my guess is that the file is being written to faster than the heal can manage.
10:01 tjikkun I guess I'll just have to stop that VM for a few hours to let the self-heal complete. Then update, then turn on sharding
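
For watching a heal like this, two commonly used commands (the volume name is a placeholder):

    gluster volume heal myvol info          # files still pending heal on each brick
    gluster volume heal myvol statistics    # per-crawl counters from the self-heal daemon
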
10:20 kovshenin joined #gluster
10:27 mhulsman1 joined #gluster
10:30 msvbhat joined #gluster
10:45 tjikkun will sharding be applied to a file that has already been created?
10:51 suliba joined #gluster
10:54 ashiq joined #gluster
10:57 mhulsman joined #gluster
11:06 atinm joined #gluster
11:19 Bhaskarakiran joined #gluster
11:21 Jacob843 joined #gluster
11:22 post-factum tjikkun: no
11:24 itisravi joined #gluster
11:27 deniszh1 joined #gluster
11:28 mhulsman joined #gluster
11:29 shyam joined #gluster
11:34 tjikkun post-factum: ok, is there a way to force sharding on an existing file?
11:35 post-factum tjikkun: cp + rm
11:35 nishanth joined #gluster
11:36 tjikkun post-factum: that's not really the existing file :P So not with gluster heal or rebalance?
11:36 post-factum tjikkun: afaik, no
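
A minimal sketch of the cp-and-replace approach post-factum suggests, assuming sharding is already enabled on the volume and the VM using the image is stopped; the mount path and file names are hypothetical:

    cd /mnt/myvol
    cp --sparse=always vm.img vm.img.sharded   # new file is written through the shard translator
    mv vm.img.sharded vm.img                   # replaces (unlinks) the old, unsharded copy
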
11:37 tjikkun ok, thanks for the info
11:42 Bhaskarakiran joined #gluster
11:48 deniszh joined #gluster
11:58 chirino joined #gluster
12:06 Gnomethrower joined #gluster
12:10 pur joined #gluster
12:14 johnmilton joined #gluster
12:20 johnmilton joined #gluster
12:25 DV_ joined #gluster
12:44 plarsen joined #gluster
12:45 [diablo] joined #gluster
12:46 DV_ joined #gluster
12:53 kukulogy joined #gluster
12:53 mhulsman1 joined #gluster
12:56 dlambrig joined #gluster
12:57 msvbhat joined #gluster
12:59 mhulsman joined #gluster
12:59 mhulsman1 joined #gluster
13:05 shyam joined #gluster
13:08 nbalacha joined #gluster
13:14 Gnomethrower joined #gluster
13:18 julim joined #gluster
13:26 mhulsman joined #gluster
13:27 jest joined #gluster
13:32 hgowtham joined #gluster
13:39 msvbhat joined #gluster
13:46 kukulogy joined #gluster
13:55 squizzi joined #gluster
13:55 bowhunter joined #gluster
13:55 skylar joined #gluster
13:56 dnunez joined #gluster
13:58 shyam joined #gluster
13:58 atinm joined #gluster
14:02 jordiesteban joined #gluster
14:06 cloph if I create a replica 2 volume on four bricks - will the shards of a given file all end up in the same replica pair?
14:07 alvinstarr joined #gluster
14:14 hagarth joined #gluster
14:23 neofob joined #gluster
14:30 cloph is it right that with replica 2, even when it is a 2x2, failure of a single brick (each brick on a different server) will bring the whole volume down/read-only?
14:33 cloph (and to answer my other question myself: shards of a single file will not all end up on the same replica pair in a distributed replicated volume)
14:36 msvbhat joined #gluster
14:37 eightyeight of course self-healing is triggered on replicated files, but what about dispersed?
14:38 jkroon cloph, just be sure your replicated bricks aren't on the same machine, meaning if your bricks are A/B/C/D then A and B will mirror each other (and C and D), so A and B should be on different machines, and so should C and D.
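
A sketch of the ordering jkroon describes: in a distributed-replicated create, consecutive bricks on the command line form a replica pair, so keep each pair on different machines. Server and brick names here are hypothetical:

    # bricks 1+2 form the first replica pair, bricks 3+4 the second
    gluster volume create myvol replica 2 \
        serverA:/data/brick serverB:/data/brick \
        serverC:/data/brick serverD:/data/brick
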
14:38 cloph jkroon: yeah, that's for sure - but will any failure of host A or B cause a split-brain scenario / force the volume to read-only to prevent split-brain?
14:39 jkroon depends on your quorum settings but I never did play with those.
14:39 cloph or would it be OK to continue and self-heal if e.g. host C with brick C goes down and reappears?
14:39 jkroon so let's assume updates happen against B, once A comes back up auto-heal should replicate the changes.
14:40 cloph as the volume should be used for VM-images, it is important that the volume doesn't go to readonly when one of the hosts goes down
14:40 muneerse joined #gluster
14:41 jkroon cloph, i've been using gluster for other purposes, 5m small files on a single volume, i've rebooted servers etc and never have I had an issue with that.
14:41 jkroon however, what does annoy me at the moment is there is some performance issue with 3.7.14 that's causing me MAJOR headaches.
14:41 cloph yeah, but VMs are very picky when suddenly their writes fail.. :-)
14:42 jkroon so is most systems.
14:42 TvL2386 joined #gluster
14:43 cloph nah, other stuff just has a failed operation and carries on with other work, while a VM and its services will then all start malfunctioning..
14:45 cloph eightyeight: I don't think you can call it "self heal" for dispersed files. Doesn't the redundancy info have to be computed whenever stuff changes?
14:45 eightyeight cloph: are dispersed volumes redundant?
14:46 cloph you better configure them with redundancy, otherwise they make no sense.
14:46 cloph but the redundancy doesn't stem from duplicated copies of the same stuff
14:48 kukulogy joined #gluster
14:48 cloph eightyeight: just make sure you're not mixing it with a distributed volume
14:49 eightyeight i know the difference between distributed and dispersed
14:49 eightyeight will glusterfs automatically heal dispersed volumes?
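
For context on the dispersed question, a hedged sketch of creating a dispersed volume with redundancy (here 4 data bricks plus 2 redundancy bricks, so any two bricks can be lost); host and path names are placeholders:

    gluster volume create ecvol disperse 6 redundancy 2 \
        host1:/data/brick host2:/data/brick host3:/data/brick \
        host4:/data/brick host5:/data/brick host6:/data/brick
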
14:49 Arrfab cloph: I hope you'll not try to use gluster-client for the hypervisor, but rather libgfapi :)
14:49 cloph ok, better be safe than sorry by talking past each other :-)
14:50 cloph Arrfab: unfortunately debian decided not to have a gfapi version of qemu...
14:50 cloph Arrfab: is it so much better than running of a fuse mount?
14:50 Arrfab cloph: yes, as fuse itself is awful :-)
14:50 cloph (and do you have an answer to the one brick of a 2x2 goes down - what happens to the volume?)
14:51 jkroon Arrfab, grr.  and nfs isn't any better.
14:51 Arrfab jkroon: true, reason why I mentioned libgfapi for kvm guests
14:51 jkroon Arrfab, how familiar are you with fuse?
14:51 jkroon i've recently bumped into some sudden performance issues, now on 3.7.14 and recent kernels.
14:52 Arrfab cloph: iirc (from my initial tests) if libgfapi is used, it will write to both (in a distributed/replicated setup) and will detect one path failing (and disable it)
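
For reference, the libgfapi path Arrfab mentions is used from qemu via gluster:// URLs, which needs a qemu build with GlusterFS support (the piece cloph notes is missing from Debian's package). Host, volume and image names below are placeholders:

    qemu-img create -f qcow2 gluster://serverA/vmvol/guest01.qcow2 20G
    qemu-system-x86_64 -m 2048 \
        -drive file=gluster://serverA/vmvol/guest01.qcow2,format=qcow2,if=virtio
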
14:52 jkroon seeing flock() on gluster mounted filesystems just not completing.
14:53 jkroon seems to relate to flock() on file descriptors referencing files that were unlinked after open() but before (or during?) flock()
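
A rough shell reproduction of the sequence jkroon describes, using flock(1) on an already-open descriptor; the mount path is hypothetical and this is only a sketch, not a confirmed reproducer:

    exec 9>/mnt/myvol/lockfile     # open an fd on the gluster FUSE mount
    rm /mnt/myvol/lockfile         # unlink while fd 9 stays open
    flock -x 9 && echo locked      # flock(2) on the now-unlinked descriptor
    flock -u 9                     # release
    exec 9>&-                      # close the descriptor
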
14:53 cloph guess I'll play it safe and have a 2x(2+1) arbiter config then... AB(c)CD(a) - or is this a bad idea?
14:54 cloph Arrfab: does the libgfapi work with sharding enabled?
14:55 Arrfab cloph: not sure about what you mean by "sharding" : can you elaborate ? is that now possible ? (I'm still on 3.6.x)
14:56 cloph sharding splits a single file into chunks of a configurable size and stores those chunks on the bricks
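
The relevant volume options, with a placeholder volume name (note that, per the earlier discussion, enabling this only affects files created afterwards):

    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB
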
14:56 Arrfab oh, new feature in 3.7, so haven't tried it
14:56 johnmilton joined #gluster
14:56 shyam joined #gluster
14:56 jkroon yea, i accidentally configured it and had to redo the setup ... didn't like it.
14:57 jbrooks joined #gluster
14:57 cloph you mean libgfapi didn't like it or you had other issues with it?
14:57 eightyeight i guess i can always test it
14:57 jkroon other issues ... gluster performance is horrible in general, sharding made it worse.
14:57 jkroon (well, i really should say fuse performance)
14:58 jkroon and we had a boatload of split-brain situations.  could have been fixed in the meantime.
14:58 protoss-arbiter joined #gluster
14:58 cloph oh? I thought it would be beneficial, with the syncing of stuff being less cpu-intensive / requiring less locking.
14:58 Arrfab jkroon: true, especially if you try rsync on top of gluster volumes mounted through fuse
14:58 dlambrig joined #gluster
14:59 jkroon Arrfab, at the moment i'm kinda wishing i didn't upgrade to 3.7, based on what I can tell 3.6 performance was much better.
14:59 jkroon but I don't think it's possible to downgrade again.
14:59 cloph 3.8 is out, why not try upgrade ;->
15:00 jkroon no wait, i completely skipped 3.6 on the particular machines where I had issues.
15:00 jkroon now i'm stumped. what else changed?!?  merge history shows that glusterfs was not updated around the time our problems started.
15:00 jkroon Thu Dec 17 20:28:39 2015 >>> sys-cluster/glusterfs-3.7.4 and that worked well until ~ 3rd week in July when suddenly everything went pearshaped.
15:01 jkroon upgrade on 12/8 to 3.7.14, and whilst things have stabilized, we're still seeing sporadic issues (random "lockups"). we traced it to a bad disk in a raid pair, which we thought caused most of the problems, and after replacing it things certainly got better again, but nowhere near what it used to be.
15:02 jkroon the two servers are now on 4.6.2 and 4.6.4 kernels resp.
15:04 jkroon you have got to be kidding me ... previous kernel was 3.19.5 ... upgraded on 18 Jul 2016 for the first server.
15:05 jkroon guess what I'm doing tonight ... :(
15:07 wushudoin joined #gluster
15:07 spalai joined #gluster
15:14 derjohn_mob joined #gluster
15:16 dlambrig joined #gluster
15:17 David_Varghese joined #gluster
15:22 kenansulayman joined #gluster
15:24 Guest26216 joined #gluster
15:29 hchiramm joined #gluster
15:29 kukulogy joined #gluster
15:30 hagarth joined #gluster
15:33 social joined #gluster
15:33 dlambrig joined #gluster
15:34 ZachLanich joined #gluster
15:38 Gnomethrower joined #gluster
15:40 msvbhat joined #gluster
15:41 Bhaskarakiran joined #gluster
15:44 nathwill joined #gluster
15:45 kotreshhr joined #gluster
15:49 shyam joined #gluster
15:55 hackman joined #gluster
15:55 mhulsman joined #gluster
15:57 dlambrig joined #gluster
15:59 jkroon hi all, again.  i take it a link count of 1 on a *file* in the .glusterfs folder (gfid file) implies a problem?
15:59 jkroon stat .glusterfs/31/9a/319a890b-630d-4128-9d17-ca8d4af5c079 ... Links: 1
15:59 mhulsman joined #gluster
16:00 jkroon how do I resolve this.
16:04 dlambrig joined #gluster
16:05 jkroon JoeJulian, you around?
16:05 cliluw joined #gluster
16:05 arcolife joined #gluster
16:05 Norky joined #gluster
16:06 xMopxShell joined #gluster
16:07 lord4163 joined #gluster
16:08 dlambrig joined #gluster
16:09 mlhess joined #gluster
16:16 rafi joined #gluster
16:19 rjoseph joined #gluster
16:20 lalatenduM joined #gluster
16:22 shruti joined #gluster
16:23 sac joined #gluster
16:31 shyam joined #gluster
16:31 pkalever joined #gluster
16:33 JoeJulian jkroon: It means that gfid file has no actual directory entry, but only if it's actually a file. If it's a symlink that's normal.
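
A common way to list such entries on a brick is to look for regular files under .glusterfs with a link count of 1; the brick path below is a placeholder, and the output is something to investigate rather than delete blindly:

    find /data/brick/.glusterfs -type f -links 1
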
16:36 jwd joined #gluster
16:47 hagarth joined #gluster
17:06 atalur joined #gluster
17:24 poornima joined #gluster
17:40 kovshenin joined #gluster
17:43 shyam joined #gluster
17:45 msvbhat joined #gluster
17:49 ilbot3 joined #gluster
17:49 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
17:56 dlambrig joined #gluster
17:58 nathwill joined #gluster
18:03 rafi joined #gluster
18:18 rafi joined #gluster
18:23 derjohn_mob joined #gluster
18:44 rafi joined #gluster
18:45 suliba joined #gluster
18:56 dlambrig joined #gluster
18:58 kovsheni_ joined #gluster
19:11 nathwill joined #gluster
19:26 Slashterix joined #gluster
19:28 Slashterix Hello, is it possible to run gluster in an environment where the ssh option PermitRootLogin no is required?
19:28 JoeJulian Sure, all my servers are set that way. To do otherwise would be foolish (imho)
19:34 Slashterix All the directions say you need to set up ssh keys for root to all the servers. How do you get the peers communicating without that?
19:34 Slashterix I'm in a Red Hat environment if that matters. Using the official packages.
19:36 kotreshhr left #gluster
19:46 ashiq joined #gluster
19:50 JoeJulian Slashterix: The official packages are not redhat packages. :P
19:52 JoeJulian glusterd, the management daemon, listens on 24007 for communication. Simply "gluster peer probe $otherserver" to create a trusted pool of servers.
19:53 JoeJulian http://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Quickstart/
19:53 glusterbot Title: Quick start Guide - Gluster Docs (at gluster.readthedocs.io)
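
A condensed version of that quickstart flow; no ssh between the servers is needed, only TCP reachability to glusterd (port 24007) and the brick ports. Host names, brick paths and the volume name are placeholders:

    gluster peer probe server2
    gluster peer status
    gluster volume create gv0 replica 2 server1:/data/brick1/gv0 server2:/data/brick1/gv0
    gluster volume start gv0
    mount -t glusterfs server1:/gv0 /mnt
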
19:53 ben453 joined #gluster
19:56 ben453 Hi, I have a memory leak issue with a replica 3 configuration when one of those replicas is an arbiter brick. It looks like the memory for the arbiter brick process is steadily increasing as I use gluster, to the point where the arbiter brick process is using gigabytes of memory. Has anyone seen this before?
19:57 JoeJulian I think that sounds familiar...
19:59 dlambrig joined #gluster
19:59 ben453 I checked out the changelog for what bug fixes went in to version 3.8.2, and it looks like some memory leaks were addressed but upgrading to that version did not fix this issue =/
20:00 JoeJulian Maybe I was dreaming... There are no bugs, present or past, filed against arbiter with the word "leak".
20:01 kukulogy joined #gluster
20:02 JoeJulian post-factum: I don't suppose you checked arbiter for leaks?
20:02 dlambrig joined #gluster
20:12 shyam joined #gluster
20:14 ben453 I'm able to recreate the memory leak issue pretty easily by setting up a replica 3 configuration with the arbiter brick and using "dd" to create 10,000 files, then delete the files, and repeat. After about 2 days of this going on, the memory on the arbiter brick gets to about 2 gigs.
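
A hedged sketch of a reproduction loop along the lines ben453 describes (volume name, brick paths and mount point are hypothetical):

    # one-time setup: gluster volume create arbvol replica 3 arbiter 1 \
    #     host1:/data/arbvol host2:/data/arbvol host3:/data/arbvol
    while true; do
        for i in $(seq 1 10000); do
            dd if=/dev/zero of=/mnt/arbvol/file$i bs=1M count=1 2>/dev/null
        done
        rm -f /mnt/arbvol/file*
    done
    # in another shell, watch the arbiter brick process's RSS grow:
    #   ps -o rss,cmd -C glusterfsd | grep arbvol
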
20:15 ben453 I'll send some feelers out to the gluster user list to see if anyone has seen this before
20:26 B21956 joined #gluster
20:35 post-factum JoeJulian: no, haven't used arbiter in production yet
20:36 JoeJulian post-factum: Do you have any tips to share with ben453 about how to write an effective bug report for memory leaks?
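
One practical tip for such reports is to attach periodic statedumps of the leaking process so the growth in its allocation pools is visible over time; the volume name is a placeholder:

    gluster volume statedump arbvol    # dumps for the brick processes land in /var/run/gluster by default
    # take another dump after the leak has grown, then diff the memory sections
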
20:36 kukulogy joined #gluster
20:40 derjohn_mob joined #gluster
20:41 d0nn1e joined #gluster
20:46 skylar joined #gluster
21:04 mhulsman joined #gluster
21:19 kukulogy joined #gluster
21:23 johnmilton joined #gluster
21:23 johnmilton joined #gluster
21:31 the-me joined #gluster
21:40 Jacob843 joined #gluster
21:43 Slashterix JoeJulian: thanks for the info. I think some directions have the keys just for easier access to each server for mounting and service management.
22:15 jobewan joined #gluster
22:22 Gambit15 joined #gluster
23:09 kukulogy joined #gluster
23:18 plarsen joined #gluster
23:19 sandersr joined #gluster
23:49 kpease joined #gluster
