
IRC log for #gluster, 2017-03-30


All times shown according to UTC.

Time Nick Message
00:00 purpleidea :P
00:00 purpleidea still true though
00:01 purpleidea JoeJulian: ++
00:01 glusterbot purpleidea: JoeJulian's karma is now 36
00:01 purpleidea heh
00:01 purpleidea major: one disclaimer: #gluster has this dope IRC bot, whereas #mgmtconfig does not. So there are some advantages to not joining us.
00:02 JoeJulian Or advantages for joining, depending on your love of glusterbot.
00:02 purpleidea Having said that, there's lots of room for bot improvement in #mgmtconfig until someone steps in ;)
00:02 major or how much cluster bot loves <---
00:02 glusterbot major: <-'s karma is now -7
00:02 major s/cluster/gluster/
00:02 glusterbot What major meant to say was: or how much gluster bot loves <---
00:03 JoeJulian It's just a minor fork of supybot.
00:03 purpleidea oh wait, does this work:
00:03 purpleidea purpleidex: ++
00:03 glusterbot purpleidea: purpleidex's karma is now 1
00:03 purpleidea s/purpleidex/purpleidea/
00:03 glusterbot What purpleidea meant to say was: purpleidea: ++
00:03 purpleidea doh
00:04 major heh
00:04 purpleidea (you can still get karma by using a 2nd irc account)
00:04 JoeJulian JoeJulian++
00:04 glusterbot JoeJulian: Error: You're not allowed to adjust your own karma.
00:04 purpleidea2 purpleidea ++
00:04 purpleidea2 purpleidea: ++
00:04 JoeJulian hehe
00:04 glusterbot purpleidea2: purpleidea's karma is now 8
00:05 purpleidea hacks
00:05 JoeJulian @channelstats
00:05 glusterbot JoeJulian: On #gluster there have been 390212 messages, containing 15024500 characters, 2470646 words, 8892 smileys, and 1250 frowns; 1776 of those messages were ACTIONs.  There have been 177307 joins, 4496 parts, 173200 quits, 29 kicks, 2254 mode changes, and 8 topic changes.  There are currently 238 users and the channel has peaked at 281 users.
00:05 * purpleidea goes back to real coding (and paperwork + sadness)
00:06 major 25 minutes till I get to head to the station
00:06 major what can I get done in 25 minutes..
00:06 JoeJulian 2.5 youtube videos
00:06 major I don't dare test something .. will get distracted trying to fix it
00:06 JoeJulian kittens or puppies
00:07 JoeJulian don't do both. Can't stop at .5.
00:07 major tempted to find an old 3brain video and start singing to it in the office ...
00:09 farhorizon joined #gluster
00:09 JoeJulian Finally... this documentation issue is complete. Now I can get back to real work.
00:09 major real work ..
00:10 major I miss real work..
00:10 major heh
00:29 kramdoss_ joined #gluster
00:47 farhorizon joined #gluster
00:56 msvbhat joined #gluster
01:14 dgandhi joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 susant joined #gluster
02:06 derjohn_mob joined #gluster
02:12 MrAbaddon joined #gluster
02:17 farhorizon joined #gluster
02:23 ankitr joined #gluster
02:28 arpu joined #gluster
02:32 riyas joined #gluster
02:41 kramdoss_ joined #gluster
02:57 papna joined #gluster
03:04 Gambit15 joined #gluster
03:04 atinm joined #gluster
03:23 Jacob843 joined #gluster
03:35 jkroon joined #gluster
03:38 nbalacha joined #gluster
03:40 magrawal joined #gluster
03:48 DV joined #gluster
03:49 itisravi joined #gluster
04:01 musa22 joined #gluster
04:03 dominicpg joined #gluster
04:03 Shu6h3ndu joined #gluster
04:07 atinm joined #gluster
04:12 Larsen_ joined #gluster
04:15 riyas joined #gluster
04:18 ndarshan joined #gluster
04:21 Shu6h3ndu_ joined #gluster
04:33 apandey joined #gluster
04:35 msvbhat joined #gluster
04:35 shdeng joined #gluster
04:38 skumar joined #gluster
04:41 yosafbridge joined #gluster
04:42 GamingUnleashed joined #gluster
04:43 GamingUnleashed could anyone tell me if I could use GlusterFS on a single node with file-level redundancy?
04:44 GamingUnleashed i.e. use the current HDDs with their data as-is, pooled together
04:45 ankitr joined #gluster
04:45 GamingUnleashed looking for something similar to FlexRAID's RAID-F
04:51 ankitr joined #gluster
04:57 kdhananjay joined #gluster
05:00 jiffin joined #gluster
05:04 Karan joined #gluster
05:12 ppai joined #gluster
05:13 sbulage joined #gluster
05:15 Ramereth joined #gluster
05:16 side_control joined #gluster
05:16 jerrcs_ joined #gluster
05:17 Seth_Karlo joined #gluster
05:17 eryc joined #gluster
05:17 glustin joined #gluster
05:18 cloph_away joined #gluster
05:19 scuttle|afk joined #gluster
05:19 sloop joined #gluster
05:20 prasanth joined #gluster
05:20 karthik_us joined #gluster
05:25 rafi joined #gluster
05:25 sanoj joined #gluster
05:27 bulde joined #gluster
05:29 Saravanakmr joined #gluster
05:31 kotreshhr joined #gluster
05:32 poornima_ joined #gluster
05:35 vbellur joined #gluster
05:36 buvanesh_kumar joined #gluster
05:37 ppai joined #gluster
05:37 Karan joined #gluster
06:05 DV__ joined #gluster
06:08 PotatoGim Hi, can I attach/detach/expand both the cold and hot tiers of a volume without stopping the service?
06:08 rafi1 joined #gluster
06:10 jiffin PotatoGim: AFAIK attach/detach tier is similar (though not identical) to add/remove brick. In that case you can perform it without stopping the volume service
06:10 jiffin hgowtham nbalacha mlind can give more clarity on this matter
06:11 PotatoGim jiffin: Thanks :)
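
(For reference, a rough sketch of the tier commands as they existed around the 3.7-3.10 releases; "myvol", the server names, and the SSD brick paths are made-up placeholders, and the exact syntax should be checked against the installed version:

    gluster volume tier myvol attach replica 2 server1:/ssd/brick1 server2:/ssd/brick1
    gluster volume tier myvol detach start     # begin draining data off the hot tier
    gluster volume tier myvol detach status    # wait for the migration to finish
    gluster volume tier myvol detach commit    # then remove the hot-tier bricks

All of the above run against a started volume; the volume itself stays online throughout.)
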
06:11 ppai joined #gluster
06:20 vinurs joined #gluster
06:21 ankush joined #gluster
06:25 jtux joined #gluster
06:30 msvbhat joined #gluster
06:30 jwd joined #gluster
06:40 jtux joined #gluster
06:41 vbellur joined #gluster
06:43 sona joined #gluster
06:44 jtux left #gluster
06:45 sonal joined #gluster
06:48 ivan_rossi joined #gluster
06:48 ivan_rossi left #gluster
06:58 nishanth joined #gluster
07:04 vinurs joined #gluster
07:14 rafi joined #gluster
07:16 Philambdo joined #gluster
07:17 BuBU29 joined #gluster
07:17 mbukatov joined #gluster
07:19 ppai joined #gluster
07:19 jkroon joined #gluster
07:20 tom[] joined #gluster
07:21 [diablo] joined #gluster
07:22 poornima_ joined #gluster
07:27 Philambdo joined #gluster
07:42 Philambdo joined #gluster
07:48 skoduri joined #gluster
07:51 [diablo] joined #gluster
08:02 rastar joined #gluster
08:06 Philambdo joined #gluster
08:15 kblin morning folks
08:24 jiffin joined #gluster
08:27 prasanth joined #gluster
08:27 derjohn_mob joined #gluster
08:34 Philambdo joined #gluster
08:34 susant joined #gluster
08:34 susant left #gluster
08:38 musa22 joined #gluster
08:38 musa22 joined #gluster
08:40 derjohn_mob joined #gluster
08:40 Klas GamingUnleashed: you can have several bricks on the same server, yes
08:40 Klas not sure what the result would be if data got fragmented/broken on one of them
08:41 Klas and I'm not at all sure why you wouldn't just use RAID1
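
(A rough sketch of the single-host setup being described, assuming a host called myhost with bricks on two separate data disks; gluster normally refuses to place replica bricks on the same server, so "force" is needed here:

    gluster volume create pool replica 2 myhost:/data/disk1/brick myhost:/data/disk2/brick force
    gluster volume start pool
    mount -t glusterfs myhost:/pool /mnt/pool    # every file is mirrored across both disks

As Klas notes, plain RAID1 usually achieves the same thing with less overhead on a single node.)
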
08:42 Wizek_ joined #gluster
08:46 Philambdo joined #gluster
08:47 Philambdo1 joined #gluster
08:47 flying joined #gluster
08:50 karthik_us joined #gluster
09:09 ndarshan|lunch joined #gluster
09:19 riyas joined #gluster
09:24 Seth_Karlo joined #gluster
09:26 BuBU29 joined #gluster
09:26 Seth_Karlo joined #gluster
09:27 jbrooks joined #gluster
09:28 ndarshan joined #gluster
09:28 musa22 joined #gluster
09:29 BuBU29 joined #gluster
09:29 zakharovvi[m] joined #gluster
09:33 rastar joined #gluster
09:46 ankitr joined #gluster
09:50 subscope joined #gluster
09:51 MrAbaddon joined #gluster
09:57 msvbhat joined #gluster
10:06 buvanesh_kumar joined #gluster
10:07 BuBU29 joined #gluster
10:07 aardbolreiziger joined #gluster
10:08 Seth_Karlo joined #gluster
10:10 sonal joined #gluster
10:11 BuBU29 joined #gluster
10:11 sona joined #gluster
10:11 Seth_Kar_ joined #gluster
10:16 anbehl joined #gluster
10:21 BuBU29 joined #gluster
10:21 ankitr joined #gluster
10:23 petan joined #gluster
10:33 kotreshhr left #gluster
10:37 ankitr joined #gluster
10:41 musa22 joined #gluster
10:43 Seth_Karlo joined #gluster
10:45 ankitr joined #gluster
10:49 rastar joined #gluster
10:55 ahino joined #gluster
10:56 rastar joined #gluster
11:02 aardbolreiziger joined #gluster
11:09 jiffin joined #gluster
11:18 ahino joined #gluster
11:20 karthik_us joined #gluster
11:26 vinurs joined #gluster
11:36 jiffin joined #gluster
11:36 purpleidea joined #gluster
11:46 sanoj joined #gluster
11:50 nbalacha joined #gluster
11:51 atinm joined #gluster
11:58 Seth_Karlo joined #gluster
12:02 ira joined #gluster
12:08 MrAbaddon joined #gluster
12:11 Seth_Karlo joined #gluster
12:24 sonal left #gluster
12:27 msvbhat joined #gluster
12:33 rastar joined #gluster
12:33 kramdoss_ joined #gluster
12:34 MrAbaddon joined #gluster
12:42 unclemarc joined #gluster
12:54 shyam joined #gluster
12:57 rastar joined #gluster
13:07 sona joined #gluster
13:17 skylar joined #gluster
13:23 musa22 joined #gluster
13:27 msvbhat joined #gluster
13:28 nbalacha joined #gluster
13:30 squizzi joined #gluster
13:31 buvanesh_kumar joined #gluster
13:43 ira joined #gluster
14:04 kdhananjay joined #gluster
14:05 ira joined #gluster
14:05 Acinonyx joined #gluster
14:07 musa22 joined #gluster
14:09 nbalacha joined #gluster
14:14 kblin morning folks
14:19 kblin I've got a bunch of glusterfs and glusterfsd processes running that are ignoring my "/etc/init.d/glusterfs-server stop"
14:19 kblin what's the best way of actually stopping them?
14:19 cloph_away the "volume info provider" is different from the "brick processes"
14:20 cloph_away so likely that script is only dealing with the main gluster process, but not the bricks
14:21 cloph_away might be a separate one, otherwise just kill them manually (or use a helper script like https://github.com/gluster/glusterfs/blob/master/extras/stop-all-gluster-processes.sh)
14:21 glusterbot Title: glusterfs/stop-all-gluster-processes.sh at master · gluster/glusterfs · GitHub (at github.com)
14:21 kblin fair enough. so how do I stop them? kill -2 <pid> ?
14:22 kblin there's quite a number of self-heals in progress, which I'm aware of
14:23 kblin but I need to update from 3.5 to a current version, and I need to get the cluster overall back up soonish, so I'd rather continue the self-heals after the upgrade
14:24 cloph_away if you're sure the quorum is met even without those bricks, then feel free to kill them (TERM signal should do)
14:27 kblin there's no clients reading from the volume right now
14:27 kblin so I'm happy to take down all the things
14:27 susant joined #gluster
14:28 skoduri joined #gluster
14:29 cloph_away oh, then do gluster volume stop first to take the whole volume down.
14:29 cloph_away then there should be no brick processes anymore
14:30 kblin oh, ok
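
(Putting cloph_away's advice together, roughly the sequence for taking everything down cleanly; "myvol" is a placeholder volume name:

    gluster volume stop myvol             # stops that volume's brick processes
    /etc/init.d/glusterfs-server stop     # stops the management daemon (glusterd)
    pgrep -l gluster                      # anything still running?
    pkill glusterfsd; pkill glusterfs     # TERM any leftovers, or use the
                                          # stop-all-gluster-processes.sh helper linked above
)
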
14:33 kblin hm, I think I should be up, but when checking on the further progress of the self-heal, I'm a bit confused about the new gluster volume heal command
14:33 kblin it should be "gluster volume heal myvolume info" to show the info, right?
14:39 cloph yes
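
(For reference, the command just confirmed and the general shape of its output, one block per brick; "myvolume" and the brick path are placeholders and the exact wording varies a little between releases:

    gluster volume heal myvolume info
    # Brick server1:/export/brick1
    # Status: Connected
    # Number of entries: 0
    # ...
)
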
14:39 kblin 0-gfapi: failed to get the 'volume file' from server [No such file or directory]
14:39 kblin hm, that doesn't sound too good
14:39 cloph it should list all bricks and their status (connected or not) and the number of entries..
14:39 cloph you did restart the gluster server, didn't you?
14:40 kdhananjay1 joined #gluster
14:41 kblin yes
14:42 cloph and regular gluster volume info works?
14:42 kblin yup
14:42 kblin I see all the bricks I expect
14:43 cloph interesting, then it should have the "volfile", aka the info of the volume / which bricks are part of it, etc.
14:44 kblin ok, so I just did a "gluster volume heal myvol enable", and now info again prints out loads of stuff as expected
14:45 kblin but the error message I had before looked weird. not like it was saying "you didn't start the self heal, stupid"
14:46 kblin but something like "there's no such volume 'myvol'"
14:46 kblin don't have the exact wording anymore, lost it in the thousands of lines printed by the heal info now
14:47 kblin and I'm still getting "Launching heal operation to perform index self heal on volume myvol has been unsuccessful on bricks that are down. Please check if all brick processes are running."
14:48 kblin but I'm not aware of any bricks being down
14:49 flying joined #gluster
14:50 kblin gluster volume status certainly lists all bricks as being up
14:51 kblin only one of my three servers is running a self-heal daemon, is that expected?
14:52 kblin also, there's basically no disk IO, so I don't think there's a running self-heal
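
(One way to check the self-heal daemons, with "myvol" as a placeholder; every server should show an online shd row, and restarting glusterd on a node usually respawns a missing one:

    gluster volume status myvol      # look for "Self-heal Daemon on <host> ... Y <pid>" per server
    systemctl restart glusterd       # on the node missing its shd; or /etc/init.d/glusterfs-server restart
    gluster volume heal myvol full   # optionally trigger a full heal once all daemons are up
)
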
14:54 sona joined #gluster
15:00 kblin hmm
15:01 kblin oh, I also can't mount the volume from my client anymore
15:04 shyam joined #gluster
15:05 kblin hm, all glusterds are now listening to all interfaces, not just the internal ones
15:13 wushudoin joined #gluster
15:22 farhorizon joined #gluster
15:25 musa22 joined #gluster
15:30 sanoj joined #gluster
15:32 jkroon joined #gluster
15:34 nbalacha joined #gluster
15:34 kblin I guess upgrading was a mistake
15:37 sanoj joined #gluster
15:46 atinm joined #gluster
15:47 vinurs joined #gluster
15:47 kblin I see there's a report about auth.allow changing between 3.9 and 3.10, but auth.allow is set to * for my setup and my client still can't connect
15:49 d4n13L yeah
15:49 d4n13L doesn't matter
15:49 d4n13L doesn't work -.-
15:49 d4n13L ran into the same issues
15:49 d4n13L it's comparing some random bytes to the list
15:49 d4n13L even if it's empty
15:50 d4n13L rolled back to the previous version in the end and am waiting for the new patched version
15:54 kblin oh, ok
15:54 kblin crud
15:59 cholcombe joined #gluster
16:04 farhorizon joined #gluster
16:05 farhorizon joined #gluster
16:05 kblin d4n13L: thanks for that tip, 3.9.1 works just fine
16:05 kblin phew
16:08 farhorizon joined #gluster
16:09 rwheeler joined #gluster
16:24 Gambit15 joined #gluster
16:28 buvanesh_kumar joined #gluster
16:30 Guest5289 hmm, no, with 3.10 and auth.allow set to '*' it works
16:30 Guest5289 it's only when it's non-'*' that it starts comparing and fails
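
(The commands involved, for anyone hitting the same thing; "myvol" is a placeholder, and per the discussion above it is the restricted, non-'*' form that was reported broken on 3.10.0:

    gluster volume get myvol auth.allow                # show the current setting
    gluster volume set myvol auth.allow '*'            # allow any client
    gluster volume set myvol auth.allow '192.168.1.*'  # restrict to specific addresses
)
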
16:31 rastar joined #gluster
16:36 nishanth joined #gluster
16:43 MrAbaddon joined #gluster
16:48 dfs-gfs joined #gluster
16:49 ankitr joined #gluster
16:50 dfs-gfs hi all, I have a question about healing not repairing an out-of-sync directory. I have a replicated volume with a replica count of 3 and have set the quorum options appropriately. When I run gluster vol heal, the directory is not replicated as expected
16:51 dfs-gfs here's the directory extended attrs
16:51 dfs-gfs here's the extended attrs on the directory
16:51 dfs-gfs https://gist.github.com/jaloren/ff2d54ab558f4dff447cc54bb276300f
16:51 glusterbot Title: gist:ff2d54ab558f4dff447cc54bb276300f · GitHub (at gist.github.com)
16:52 dfs-gfs split brain info says there is no split brain issue
16:52 dfs-gfs so I don't understand why it's not self-healing
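
(The usual way to collect attributes like those in the gist is to run getfattr against each brick's copy of the directory; the path below is hypothetical. The trusted.afr.<volname>-client-N values are the pending-change counters AFR uses to decide which copy is stale:

    getfattr -d -m . -e hex /export/brick1/problem-dir
)
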
16:58 rastar joined #gluster
17:02 ira joined #gluster
17:04 jkroon joined #gluster
17:07 msvbhat joined #gluster
17:09 nathwill joined #gluster
17:14 mlg9000 Hi, new to gluster... what's the recommended way to do a rolling upgrade between minor versions?
17:14 JoeJulian d4n13L: Has there been a bug report for that?
17:16 JoeJulian mlg9000: http://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.10/
17:16 glusterbot Title: Upgrade to 3.10 - Gluster Docs (at gluster.readthedocs.io)
17:21 mlg9000 JoeJulian: I just did that on a 3.8.9 to 3.8.10 upgrade, which ended up mostly working, except that one volume reported all its bricks down on the upgraded node (even though the brick processes were there). There was some sort of transaction lock issue... I ended up having to restart glusterd on my other nodes to fix it. I'd expected that to break the volume, as I didn't have enough nodes for quorum, but it was OK....
17:21 MrAbaddon joined #gluster
17:23 JoeJulian That's still the recommended way. I don't suppose you were able to file a bug report on your experience?
17:23 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
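
(The upgrade guide linked above boils down to roughly the following loop, run on one server at a time; this sketch assumes systemd and yum, so adjust for your init system and package manager, and "myvol" is a placeholder:

    systemctl stop glusterd               # or /etc/init.d/glusterfs-server stop
    pkill glusterfsd; pkill glusterfs     # make sure brick and client processes are gone
    yum update 'glusterfs*'               # upgrade the packages
    systemctl start glusterd
    gluster volume heal myvol info        # move to the next server only once entries reach 0
)
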
17:33 moneylotion joined #gluster
17:34 derjohn_mob joined #gluster
17:35 kpease joined #gluster
17:37 Wizek_ joined #gluster
17:56 jarbod_ joined #gluster
17:57 nigelb joined #gluster
17:59 thatgraemeguy joined #gluster
17:59 thatgraemeguy joined #gluster
17:59 susant left #gluster
18:00 arif-ali joined #gluster
18:14 Gambit15 From the reports of 3.8.10 that I've seen so far, I'm guessing it's not yet considered stable?
18:23 bwerthmann joined #gluster
18:28 shyam Gambit15: are you referring to the VM corruption on 3.8.10? That was a case that had a few fixes go in, but it looks like they did not solve the problem entirely. So that does not take away from the stability of 3.8.10; rather, it failed to fix a problem identified earlier (which exists in 3.8.9 and earlier as well)
18:29 shyam Or put another way, that was not introduced in 3.8.10, it pre-existed
18:36 bwerthmann We are upgrading from 3.7.8 to 3.7.20 and do a 'gluster volume set all cluster.op-version 30712' at the end of the upgrade. I noticed that '/var/lib/glusterd/vols/<vol>/info' contains 'op-version=30700'. 'gluster volume get <vol> op-version' reports 'cluster.op-version 30712'. A state dump of glusterd shows 'glusterd.current-op-version=30712'. Is this expected?
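
(For quick reference, the commands mentioned in that question, with "myvol" standing in for a real volume name:

    gluster volume set all cluster.op-version 30712     # bump the cluster-wide op-version
    gluster volume get myvol op-version                 # what the running cluster reports
    grep op-version /var/lib/glusterd/vols/myvol/info   # the on-disk per-volume info file
)
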
18:41 thatgraemeguy joined #gluster
18:41 thatgraemeguy joined #gluster
19:15 shutupsquare joined #gluster
19:16 anbehl joined #gluster
19:22 jwd joined #gluster
19:31 jiffin joined #gluster
19:33 ira joined #gluster
19:45 subscope joined #gluster
20:00 ankush joined #gluster
20:09 ic0n joined #gluster
20:17 purpleidea joined #gluster
20:17 purpleidea joined #gluster
20:29 steveeJ_ joined #gluster
20:34 sadbox joined #gluster
20:53 musa22 joined #gluster
21:05 Gambit15 joined #gluster
21:09 farhorizon joined #gluster
21:18 farhorizon joined #gluster
21:19 shyam joined #gluster
21:20 musa22_ joined #gluster
21:21 farhorizon joined #gluster
21:32 oajs joined #gluster
22:37 musa22 joined #gluster
22:51 musa22 joined #gluster
22:53 musa22_ joined #gluster
23:01 purpleidea joined #gluster
