
IRC log for #gluster, 2017-01-19


All times shown according to UTC.

Time Nick Message
00:11 wushudoin joined #gluster
00:17 msvbhat joined #gluster
00:28 plarsen joined #gluster
00:28 Klas joined #gluster
00:32 wushudoin joined #gluster
00:37 Wizek_ joined #gluster
00:38 PaulCuzner joined #gluster
00:39 PaulCuzner left #gluster
01:07 shdeng joined #gluster
01:20 wushudoin joined #gluster
01:41 nh2_ joined #gluster
01:41 nh2_ hi, when I run bitrot ondemand, all my files are always in "Number of Skipped files:", why could that be?
01:42 nh2_ I would like to force it to scrub/check all files, because I introduced a corruption on one brick with dd and want to see if gluster detects it
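[note] One possible explanation for everything landing under "Number of Skipped files" is that the bitrot daemon has not signed those files yet, so the scrubber has nothing to verify them against. A minimal sketch of the on-demand scrub workflow (VOLNAME is a placeholder; skip the enable step if bitrot detection is already on for the volume):

  gluster volume bitrot VOLNAME enable            # only needed if bitrot is not yet enabled
  gluster volume bitrot VOLNAME scrub ondemand    # start an immediate scrub run
  gluster volume bitrot VOLNAME scrub status      # per-node scrubbed/skipped/corrupted counts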
01:49 susant left #gluster
02:02 shortdudey123 joined #gluster
02:02 gem joined #gluster
02:33 jvandewege_ joined #gluster
02:33 john51_ joined #gluster
02:33 l2__ joined #gluster
02:34 nh2_ joined #gluster
02:36 partner_ joined #gluster
02:37 overclk_ joined #gluster
02:37 sage_ joined #gluster
02:40 derjohn_mobi joined #gluster
02:40 sloop- joined #gluster
02:40 lucasrolff_ joined #gluster
02:40 k0nsl_ joined #gluster
02:40 MadPsy_ joined #gluster
02:40 MadPsy_ joined #gluster
02:40 JPau1 joined #gluster
02:41 scc joined #gluster
02:46 sankarshan joined #gluster
02:46 sankarshan joined #gluster
02:46 pocketprotector joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:49 coredumb joined #gluster
02:50 dgandhi joined #gluster
02:50 aronnax joined #gluster
02:50 snehring joined #gluster
02:50 wushudoin joined #gluster
02:51 fus joined #gluster
02:59 magrawal joined #gluster
03:00 Limebyte joined #gluster
03:11 Lee1092 joined #gluster
03:11 anoopcs joined #gluster
03:15 buvanesh_kumar joined #gluster
03:24 riyas joined #gluster
03:38 kramdoss_ joined #gluster
03:48 atinm_ joined #gluster
03:51 Shu6h3ndu joined #gluster
03:51 nbalacha joined #gluster
03:52 Prasad joined #gluster
04:02 RameshN joined #gluster
04:07 Prasad joined #gluster
04:18 bowhunter joined #gluster
04:22 Shu6h3ndu joined #gluster
04:27 Prasad joined #gluster
04:29 msvbhat joined #gluster
04:41 kramdoss_ joined #gluster
04:43 rjoseph joined #gluster
04:45 ashiq joined #gluster
04:55 msvbhat joined #gluster
05:03 ndarshan joined #gluster
05:12 Humble joined #gluster
05:14 newdave joined #gluster
05:25 itisravi joined #gluster
05:35 gem joined #gluster
05:35 sbulage joined #gluster
05:42 jiffin joined #gluster
05:43 aravindavk joined #gluster
05:47 riyas joined #gluster
05:50 kdhananjay joined #gluster
05:50 msvbhat joined #gluster
05:59 apandey joined #gluster
06:04 Karan joined #gluster
06:07 susant joined #gluster
06:17 skoduri joined #gluster
06:18 squizzi joined #gluster
06:18 ppai joined #gluster
06:23 sanoj joined #gluster
06:27 msvbhat joined #gluster
06:28 gyadav joined #gluster
06:32 ahino joined #gluster
06:35 susant joined #gluster
06:41 rafi joined #gluster
06:43 Saravanakmr joined #gluster
06:48 bala_konda joined #gluster
06:49 ankit_ joined #gluster
07:17 mhulsman joined #gluster
07:21 Caveat4U_ joined #gluster
07:22 jtux joined #gluster
07:34 gyadav joined #gluster
07:38 Humble joined #gluster
07:58 unlaudable joined #gluster
08:06 devyani7 joined #gluster
08:25 fsimonce joined #gluster
08:31 Ashutto joined #gluster
08:31 Ashutto Hello
08:31 glusterbot Ashutto: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:32 Ashutto I have a question about volume top (read|write)
08:33 Ashutto i clear the stats (gluster volume top x clear) and then wait a few seconds. the top read still shows the hit count it had before the clear
08:33 Ashutto so does the top write
08:33 Ashutto is that correct?
08:33 Ashutto i'm pretty sure that specific file has not been read or written that many times in a few seconds
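[note] For reference, a sketch of the sequence Ashutto is describing (VOLNAME is a placeholder; list-cnt just limits the output):

  gluster volume top VOLNAME clear                # reset the per-brick counters
  sleep 10
  gluster volume top VOLNAME read list-cnt 10     # should now show only reads since the clear
  gluster volume top VOLNAME write list-cnt 10    # likewise for writes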
08:33 alezzandro joined #gluster
08:34 masber joined #gluster
08:39 sanoj joined #gluster
08:41 jri joined #gluster
08:50 [diablo] joined #gluster
08:50 l2__ joined #gluster
08:54 musa22 joined #gluster
08:55 Debloper joined #gluster
08:56 flying joined #gluster
08:56 _nixpanic joined #gluster
08:57 _nixpanic joined #gluster
08:59 nishanth joined #gluster
09:00 Prasad joined #gluster
09:02 ahino joined #gluster
09:07 jesk joined #gluster
09:08 musa22_ joined #gluster
09:11 ibotty joined #gluster
09:12 k4n0 joined #gluster
09:20 derjohn_mob joined #gluster
09:24 bartden joined #gluster
09:26 bartden Hi, apparently there are logrotate configurations set by default on installing glusterfs on a client. These contain a killall -HUP command for the glusterfs and glusterfsd process, won’t this break a session with a currently mounted system? As we are running processes all the time which access the mounted glusterfs cluster
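[note] The stanza bartden is describing looks roughly like the sketch below (paths and rotation options are assumptions, not copied from any particular glusterfs package); the relevant detail is that the postrotate script sends HUP rather than terminating the processes:

  /var/log/glusterfs/*.log {
      weekly
      rotate 52
      compress
      missingok
      postrotate
          /usr/bin/killall -HUP glusterfs  > /dev/null 2>&1 || true
          /usr/bin/killall -HUP glusterfsd > /dev/null 2>&1 || true
      endscript
  }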
09:27 Slashman joined #gluster
09:29 ivan_rossi joined #gluster
09:33 rafi1 joined #gluster
09:36 gem joined #gluster
09:43 Prasad joined #gluster
09:46 poornima_ joined #gluster
09:55 msvbhat joined #gluster
09:55 Shu6h3ndu joined #gluster
09:56 hybrid512 joined #gluster
10:01 social joined #gluster
10:08 shyam joined #gluster
10:09 bbooth joined #gluster
10:14 kettlewell joined #gluster
10:15 kettlewell joined #gluster
10:26 musa22 joined #gluster
10:30 musa22 joined #gluster
10:34 bbooth joined #gluster
10:41 masber joined #gluster
10:53 bala_konda left #gluster
10:58 bartden Hi, apparently there are logrotate configurations set by default on installing glusterfs on a client. These contain a killall -HUP command for the glusterfs and glusterfsd process, won’t this break a session with a currently mounted system? As we are running processes all the time which access the mounted glusterfs cluster
10:58 itisravi joined #gluster
10:59 atinmu joined #gluster
11:00 kdhananjay joined #gluster
11:01 pulli joined #gluster
11:05 Seth_Karlo joined #gluster
11:05 pulli joined #gluster
11:08 rafi joined #gluster
11:21 kotreshhr joined #gluster
11:21 Prasad joined #gluster
11:23 skoduri joined #gluster
11:25 Prasad_ joined #gluster
11:26 ashiq joined #gluster
11:27 Prasad_ joined #gluster
11:34 percevalbot joined #gluster
11:35 ira joined #gluster
11:41 buvanesh_kumar joined #gluster
11:42 Shu6h3ndu joined #gluster
11:44 sanoj joined #gluster
11:46 apandey joined #gluster
11:47 Vide joined #gluster
11:54 kotreshhr left #gluster
11:57 rastar joined #gluster
11:57 shyam joined #gluster
12:03 rjoseph joined #gluster
12:11 bbooth joined #gluster
12:12 susant left #gluster
12:12 skoduri joined #gluster
12:20 masber joined #gluster
12:27 Seth_Karlo joined #gluster
12:29 atinmu joined #gluster
12:32 atinm_ joined #gluster
12:37 cloph DK2: re slow webpage: see https://joejulian.name/blog/dht-misses-are-expensive/ and re current version on centos 6: see pointers in https://download.gluster.org/pub/gluster/glusterfs/3.8/LATEST/EPEL.repo/EPEL.README
12:37 glusterbot Title: DHT misses are expensive (at joejulian.name)
12:47 nh2_ joined #gluster
12:48 musa22 joined #gluster
12:51 kkeithley bartden:  kill(all) -HUP doesn't terminate the glusterfs process, it's the unix/linux standard way to make a daemon reread its config
12:52 kdhananjay joined #gluster
12:54 bartden kkeithley depends on how the application reacts to signal HUP, but i do understand then that gluster just reloads the config
12:54 kkeithley that's correcdt
12:54 kkeithley correct
12:58 mpingu joined #gluster
13:00 nishanth joined #gluster
13:08 pdrakeweb joined #gluster
13:14 msvbhat joined #gluster
13:18 ashiq joined #gluster
13:18 shyam joined #gluster
13:22 Saravanakmr joined #gluster
13:27 mb_ joined #gluster
13:34 bbooth joined #gluster
13:37 nbalacha joined #gluster
13:46 prasanth joined #gluster
13:48 Saravanakmr joined #gluster
13:56 nbalacha joined #gluster
13:57 arpu joined #gluster
14:00 musa22 joined #gluster
14:03 Karan joined #gluster
14:09 ashiq joined #gluster
14:12 msvbhat joined #gluster
14:15 nbalacha joined #gluster
14:15 Seth_Kar_ joined #gluster
14:26 ahino1 joined #gluster
14:27 vbellur joined #gluster
14:32 kpease joined #gluster
14:33 msvbhat joined #gluster
14:34 derjohn_mob joined #gluster
14:36 nbalacha joined #gluster
14:36 ivan_rossi left #gluster
14:39 skylar joined #gluster
14:54 bbooth joined #gluster
14:57 nbalacha joined #gluster
15:15 DV__ joined #gluster
15:16 squizzi joined #gluster
15:20 Gambit15 joined #gluster
15:20 saali joined #gluster
15:25 nbalacha joined #gluster
15:27 annettec joined #gluster
15:40 susant joined #gluster
15:40 derjohn_mob joined #gluster
15:41 farhorizon joined #gluster
15:44 Jacob843 joined #gluster
15:47 vbellur joined #gluster
15:50 Seth_Karlo joined #gluster
15:51 kotreshhr joined #gluster
15:56 atinm_ joined #gluster
15:58 sbulage joined #gluster
16:03 nbalacha joined #gluster
16:04 Caveat4U joined #gluster
16:07 wushudoin joined #gluster
16:08 wushudoin joined #gluster
16:10 arpu joined #gluster
16:11 plarsen joined #gluster
16:16 JoeJulian kkeithley: Well, it tells gluster to *try* to reread its config. If there's no change it does nothing.
16:16 R4yTr4cer joined #gluster
16:16 kkeithley indeed
16:28 Slashman joined #gluster
16:36 bluenemo joined #gluster
16:38 ahino joined #gluster
16:39 bbooth joined #gluster
16:46 bbooth joined #gluster
16:46 Shu6h3ndu joined #gluster
16:50 farhorizon joined #gluster
16:51 jdossey joined #gluster
16:54 Wizek joined #gluster
17:02 musa22 joined #gluster
17:07 Vide joined #gluster
17:12 primehaxor joined #gluster
17:19 nishanth joined #gluster
17:20 bowhunter joined #gluster
17:25 rastar joined #gluster
17:28 jkroon joined #gluster
17:35 sanoj joined #gluster
17:35 skoduri joined #gluster
17:37 Vide joined #gluster
17:45 jiffin joined #gluster
18:12 alezzandro joined #gluster
18:21 vbellur joined #gluster
18:23 rastar joined #gluster
18:41 msvbhat joined #gluster
18:44 bowhunter joined #gluster
18:53 Acinonyx joined #gluster
19:00 vbellur joined #gluster
19:06 Acinonyx joined #gluster
19:26 ahino joined #gluster
19:32 k4n0 joined #gluster
19:41 annettec joined #gluster
19:42 annettec joined #gluster
19:43 derjohn_mob joined #gluster
19:46 mb_ joined #gluster
19:51 k4n0 joined #gluster
19:54 flying joined #gluster
19:56 flying how can I check the state of a cluster?
19:58 flying ?
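[note] The usual starting points for checking cluster state are the CLI status commands (VOLNAME is a placeholder):

  gluster peer status                  # peer membership and connection state
  gluster pool list                    # same information, one line per node
  gluster volume status VOLNAME        # brick processes, PIDs, ports, online state
  gluster volume info VOLNAME          # volume configuration and options
  gluster volume heal VOLNAME info     # outstanding self-heal entries (replicated/dispersed volumes)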
20:00 om2 joined #gluster
20:10 om2 joined #gluster
20:12 primehaxor joined #gluster
20:12 BitByteNybble110 joined #gluster
20:12 msvbhat joined #gluster
20:14 aleksk joined #gluster
20:15 aleksk hello all, having an issue i am sure the fix is simple but it's really eluding me at the moment.. have a 10 node cluster of old desktop machines, and created a disperse-distributed volume from raid0 volumes on each (so 10 bricks, one per host).. one of the hosts, i had to swap the drive on and i screwed up the system and had to reinstall.. having trouble getting the newly reinstalled system back into the pool..
20:16 aleksk can't peer probe from the new server to the others because it says they're already part of a cluster, and i can't peer probe the new server from existing servers because it says that the host is already in the peer list..
20:16 aleksk i can't detach the peer because it's part of a volume
20:16 aleksk can't do a brick-replace because it says it doesn't know about the host
20:16 aleksk i'm out of ideas :-/
20:17 aleksk s/doesn't know about the host/says the host is not connected'
20:19 aleksk the volume is stopped at the moment
20:19 aleksk i'm using gluster 3.9.0 btw
20:22 aleksk and trying to remove the brick by force tells me: volume remove-brick commit force: failed: Remove brick incorrect brick count of 1 for disperse 5
20:22 aleksk there anything else i can try that i didn't already think of/try ?
20:33 aleksk ok i tried to replace the brick with a temp brick from a host already in the cluster.. i could then detach the peer but i'm having trouble re-probing him now.. getting "Probe returned with Transport endpoint is not connected"
20:33 JoeJulian aleksk: Assuming /var/lib/glusterd was wiped, you need to set the server uuid to match what the peers expect it to be in /var/lib/glusterd/glusterd.info. You can get that uuid from one of the peers. Look in /var/lib/glusterd/peers/
20:34 aleksk JoeJulian, i got the uuid just dont know what to do with it
20:34 JoeJulian Or that...
20:35 JoeJulian transport endpoint not connected usually means network or firewall.
20:35 aleksk yes that's what it was
20:35 aleksk i forgot to re-do the firewall rules when i reinstalled the host
20:36 Humble joined #gluster
20:40 aleksk ok i think i'm almost in business... just have to re-heal the volume
20:42 aleksk there anything special when doing a heal? just do the heal before starting the volume, or any special considerations/params to pass to heal?
20:42 aleksk btw Joe, what happens if i did have the uuid and didn't do the brick-replace ? what would i do with that uuid ?
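[note] The uuid route JoeJulian mentioned amounts to something like the following sketch (host name and service manager are assumptions, /var/lib/glusterd is the default working directory, and the peers/ file layout is the usual one, so worth double-checking on your own nodes):

  # on a surviving peer: the file name under peers/ is the uuid the cluster expects
  grep -l 'hostname1=reinstalled-host' /var/lib/glusterd/peers/*

  # on the reinstalled host: write that uuid back before glusterd rejoins the pool
  systemctl stop glusterd
  sed -i 's/^UUID=.*/UUID=<uuid-from-the-peer-file>/' /var/lib/glusterd/glusterd.info
  systemctl start glusterd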
20:51 aleksk how can i tell if the heal is working.. if i do heal volname statistics it just gives me a bunch of files, doesn't tell me if it's done or what % it's up to or anything ?
20:58 aleksk i don't get this.. Gathering list of healed entries on volume volname has been unsuccessful on bricks that are down. Please check if all brick processes are running
20:58 aleksk not sure why it thinks there might be brick processes not running.. is that not just the glusterd service? how can i check what brick processes might not be running ?
20:59 aleksk i see all the brick process PIDs when i do volume volname status|grep ^Brick
21:00 alezzandro joined #gluster
21:00 aleksk the 'Online' field has 'Y' for every brick
21:08 aleksk anybody? :)
21:09 timotheus1 joined #gluster
21:12 aleksk 21:11:53 up  1:24,  1 user,  load average: 15.63, 9.96, 5.28
21:12 aleksk well host sure is busy doing SOMETHING at least
21:20 JoeJulian aleksk: Sorry, all-hands meetings...
21:21 JoeJulian aleksk: It probably means that they weren't responding in time to the rpc call. Probably because it's too busy healing.
21:22 JoeJulian aleksk: No, there's no indication of what file it's working on or how far through that file it is. You /can/ glean that information from a state dump. Just look at the self-heal locks.
21:22 aleksk ahh ok that makes sense.. host was going really slow.. ended up rebooting for some reason.. that doesn't bode well :-/ i can't tell how far the heal got or if it will pick back up ?
21:23 JoeJulian It will not pick-up mid-file, but it will start that file over again.
21:24 derjohn_mob joined #gluster
21:25 risKArt joined #gluster
21:31 kettlewell When a node is reintroduced to the cluster after being down for some time, it's replica spikes it's CPU utilization during the self-heal process ... is there a way to throttle the number of threads it uses to self-heal?
21:33 foster joined #gluster
21:33 risKArt left #gluster
21:39 aleksk renice? :)
21:39 aleksk funny you should ask.. i just noticed that as well.. shortly before my box rebooted
21:45 JoeJulian cgroups?
21:47 kettlewell I suppose either would work ... I was hopeful for a config entry that would be smart about how many CPU's it was allowed to use  :)
21:54 JoeJulian I know the guys that are working on fair queueing for this. It's actually a pretty hard problem.
21:55 JoeJulian imho, though, if you want to pin cores or limit resources in some way, it should be done as cgroups as part of the systemd service.
21:57 kettlewell yeah, we may need to consider doing that... when we bring a node backonline, we peg out at 100% CPU utilization for half a day...
21:57 bbooth joined #gluster
21:58 jkroon joined #gluster
21:59 kettlewell glad to hear that a solution is in progress to optimize this process ...  queueing problems are never easy...
22:07 aleksk i don't mind the cpu pegped.. what i do mind is if my box reboots lol
22:14 aleksk oh well guess i'll see if it happens again
22:15 aleksk i notice that with 3.9.0, the default is to have nfs.disable set to ON for volumes
22:15 aleksk not only for newly created volumes but for volumes that i had created with 3.7.x (nfs was working before but after upgrading to 3.9.0 it stopped working and it took me a little bit to realize that the default changed
22:16 aleksk not my call to change defaults, but at the same time can't say i'm a fan of not preserving settings from previously generated versions
22:19 Trefex joined #gluster
22:27 JoeJulian aleksk: Good point. Please file a bug report.
22:27 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
22:28 JoeJulian This occurred because the 3.7 volume had no specific setting regarding that as it was enabled by default. The upgrade process should specifically set nfs on if it wasn't set prior to upgrading.
22:30 aleksk set nfs on meaning enable nfs, or set nfs-disable on ?
22:31 aleksk i am not happy with the naming of the setting name either.. it's awkward having a negative setting name
22:32 bbooth joined #gluster
22:32 JoeJulian I agree
22:37 aleksk bug opened: https://bugzilla.redhat.co​m/show_bug.cgi?id=1414998
22:37 glusterbot Bug 1414998: medium, unspecified, ---, bugs, NEW , setting for nfs-disable changed when upgrading from 3.7.x to 3.9.0
22:38 aleksk i would say those error messages i got while trying to bring my newly reinstalled system back online were a "bug" too.. in that they don't really help you troubleshoot/point you to the way to fix the issue.. just kinda saying to you "ha nope not gonna do that/not gonna work to do that"
22:38 aleksk but that's just my opinion :-)
22:38 bowhunter joined #gluster
22:38 aleksk maybe the error messages are SUPPOSED to be unhelpful.. what do i know :-)
22:39 JoeJulian If it doesn't work in a way that is expected, file a bug. Worst that'll happen is they close it or just don't fix it.
22:39 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
23:00 Marbug joined #gluster
23:11 sloop joined #gluster
23:48 bbooth joined #gluster
23:48 aleksk can't remember, is there a way to tell if a heal is done ?
23:50 aleksk guess 'number of entries' will be 0 for all bricks since that's what i looks like i have currently
23:52 vbellur joined #gluster
