IRC log for #gluster, 2016-10-12


All times shown according to UTC.

Time Nick Message
00:24 kimmeh joined #gluster
00:26 telius joined #gluster
00:43 telius joined #gluster
00:53 al joined #gluster
01:10 daMaestro joined #gluster
01:11 kramdoss_ joined #gluster
01:13 plarsen joined #gluster
01:14 shdeng joined #gluster
01:26 bwerthmann joined #gluster
01:26 BitByteNybble110 joined #gluster
01:27 edyesed joined #gluster
01:27 edyesed Hi
01:27 glusterbot edyesed: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
01:28 edyesed I have a gluster thinpool volume whose internal metadata volume is quickly filling up
01:28 edyesed we're running lvm 2.02.98-6, and we don't appear to be able to resize the metadata volume
01:29 edyesed if anybody has a "here's how to stop the bleeding", I'd be in your debt
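(For reference: on newer LVM releases, roughly 2.02.100 and later, a thin pool's metadata LV can be grown online, which is likely why it fails on 2.02.98. A minimal sketch, assuming a hypothetical volume group vg0 and thin pool thinpool:)

    # check metadata usage - watch the Meta% column
    lvs -a vg0
    # grow the metadata LV by 1 GiB (needs free extents in the VG)
    lvextend --poolmetadatasize +1G vg0/thinpool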
01:39 nbalacha joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:04 harish joined #gluster
02:25 kimmeh joined #gluster
02:28 nbalacha joined #gluster
02:42 nbalacha joined #gluster
02:53 magrawal joined #gluster
02:58 kramdoss_ joined #gluster
03:02 RameshN joined #gluster
03:05 Gambit15 joined #gluster
03:14 bwerthmann joined #gluster
03:32 shdeng joined #gluster
03:51 atinm joined #gluster
03:53 bwerthma1n joined #gluster
03:53 _ndevos joined #gluster
03:53 _ndevos joined #gluster
03:58 shubhendu joined #gluster
04:26 kimmeh joined #gluster
04:28 ndarshan joined #gluster
04:28 karthik_us joined #gluster
04:28 msvbhat joined #gluster
04:31 ramky joined #gluster
04:38 bwerthmann joined #gluster
04:48 bwerthmann joined #gluster
04:50 bwerthmann joined #gluster
04:51 bwerthmann joined #gluster
04:54 sanoj joined #gluster
04:55 sanoj_ joined #gluster
05:00 Muthu joined #gluster
05:00 prth joined #gluster
05:07 ramky joined #gluster
05:16 satya4ever joined #gluster
05:20 karnan joined #gluster
05:21 Muthu joined #gluster
05:33 devyani7_ joined #gluster
05:36 hgowtham joined #gluster
05:36 hgowtham joined #gluster
05:36 mhulsman joined #gluster
05:36 dgandhi joined #gluster
05:43 overclk joined #gluster
05:49 prasanth joined #gluster
05:49 kotreshhr joined #gluster
05:52 aravindavk joined #gluster
05:55 dgandhi joined #gluster
05:56 mhulsman joined #gluster
05:59 ankitraj joined #gluster
06:03 jtux joined #gluster
06:20 msvbhat joined #gluster
06:27 kimmeh joined #gluster
06:40 overclk joined #gluster
06:49 Alghost joined #gluster
06:52 jri joined #gluster
06:55 javi404 joined #gluster
07:00 k4n0 joined #gluster
07:03 devyani7_ joined #gluster
07:05 apandey joined #gluster
07:09 ppai joined #gluster
07:12 derjohn_mob joined #gluster
07:12 ivan_rossi joined #gluster
07:13 robb_nl joined #gluster
07:17 jiffin joined #gluster
07:21 abyss^_ JoeJulian: yes, thank you, but resetting the attributes looks more difficult and I'm not sure that I understand the idea, so clearing the bricks seems safer ;)
07:22 fsimonce joined #gluster
07:22 Lee1092 joined #gluster
07:28 hchiramm joined #gluster
07:31 Philambdo joined #gluster
07:33 prth joined #gluster
07:35 nbalacha joined #gluster
07:36 k4n0 joined #gluster
07:39 rafi joined #gluster
07:40 arc0life_ joined #gluster
07:41 nbalacha joined #gluster
07:44 robb_nl_ joined #gluster
07:50 robb_nl joined #gluster
07:53 kimmeh joined #gluster
08:02 derjohn_mob joined #gluster
08:06 kxseven joined #gluster
08:15 jtux joined #gluster
08:18 itisravi joined #gluster
08:21 skoduri joined #gluster
08:24 abyss^_ JoeJulian: additionally, I suppose the idea of resetting the xattrs is error-prone - what happens if I get the attribute-changing script wrong, for example... :/
08:26 hackman joined #gluster
08:26 kdhananjay joined #gluster
08:29 nishanth joined #gluster
08:29 riyas joined #gluster
08:34 arc0life_ joined #gluster
08:34 Slashman joined #gluster
08:41 karthik_us joined #gluster
08:44 k4n0 joined #gluster
08:47 magrawal joined #gluster
08:49 f0rpaxe joined #gluster
08:54 flying joined #gluster
08:56 [diablo] joined #gluster
08:59 Slashman joined #gluster
09:04 newdave joined #gluster
09:22 flyingX joined #gluster
09:39 msvbhat joined #gluster
09:43 arc0life_ joined #gluster
09:55 devyani7_ joined #gluster
10:10 edong23 joined #gluster
10:10 _nixpanic joined #gluster
10:10 _nixpanic joined #gluster
10:14 msvbhat joined #gluster
10:20 edong23 joined #gluster
10:22 suliba joined #gluster
10:22 newdave joined #gluster
10:27 ppai_ joined #gluster
10:29 edong23 joined #gluster
10:41 MessedUpHare_ joined #gluster
10:51 harish joined #gluster
11:05 mhulsman joined #gluster
11:07 mhulsman1 joined #gluster
11:14 mhulsman joined #gluster
11:27 apandey joined #gluster
11:28 kkeithley Gluster Community meeting in approx 30 minutes in #gluster-meeting
11:29 Muthu joined #gluster
11:32 ira joined #gluster
11:40 Debloper joined #gluster
11:45 apandey joined #gluster
11:47 karthik_us joined #gluster
11:50 robb_nl joined #gluster
11:53 johnmilton joined #gluster
11:54 johnmilton joined #gluster
11:55 edong23 joined #gluster
11:55 mhulsman joined #gluster
11:56 mhulsman1 joined #gluster
12:06 kramdoss_ joined #gluster
12:07 prth joined #gluster
12:11 jdarcy joined #gluster
12:15 k4n0 joined #gluster
12:36 mhulsman joined #gluster
12:37 gem joined #gluster
12:38 mhulsman1 joined #gluster
12:38 prth joined #gluster
12:40 shyam joined #gluster
12:47 shaunm joined #gluster
12:50 bartden joined #gluster
12:51 unclemarc joined #gluster
12:52 bartden hi, can I force rsync geo-replication? The status is active, but the last sync was 15 minutes ago and the CRAWL STATUS is stuck on Changelog Crawl
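(For reference, geo-replication has no direct force-sync command; a commonly suggested workaround is to temporarily switch the change detector from the changelog to a full filesystem crawl. A sketch, assuming a hypothetical master volume mvol and slave slavehost::svol:)

    gluster volume geo-replication mvol slavehost::svol config change_detector xsync
    # once the crawl has caught up, switch back to changelog-based sync
    gluster volume geo-replication mvol slavehost::svol config change_detector changelog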
13:01 Muthu joined #gluster
13:07 msvbhat joined #gluster
13:11 BitByteNybble110 joined #gluster
13:20 arc0life_ joined #gluster
13:20 luizcpg joined #gluster
13:24 squizzi joined #gluster
13:36 shyam left #gluster
13:37 skylar joined #gluster
13:38 luizcpg joined #gluster
13:45 skoduri joined #gluster
13:51 shyam joined #gluster
13:54 pcaruana joined #gluster
13:59 unclemarc joined #gluster
14:03 prth joined #gluster
14:06 ankitraj joined #gluster
14:11 ron-slc joined #gluster
14:16 msvbhat joined #gluster
14:18 flyingX what does "W [socket.c:611:__socket_rwv] 0-management: readv on.... " mean
14:18 flyingX ?
14:24 unclemarc joined #gluster
14:26 shubhi_l joined #gluster
14:31 scuttle|afk joined #gluster
14:32 flyingX ?
14:35 riyas joined #gluster
14:37 riyas joined #gluster
14:40 derjohn_mob joined #gluster
14:43 farhorizon joined #gluster
14:47 raghu` joined #gluster
15:09 robb_nl joined #gluster
15:09 plarsen joined #gluster
15:11 arc0 joined #gluster
15:19 overclk joined #gluster
15:19 wushudoin joined #gluster
15:23 nbalacha joined #gluster
15:32 derjohn_mob joined #gluster
15:32 hackman joined #gluster
15:37 jiffin joined #gluster
15:38 jri joined #gluster
15:41 legreffier joined #gluster
15:41 legreffier hi
15:41 glusterbot legreffier: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:43 legreffier I never got the point of: "localhost:data-01 on /tmp/gsyncd-aux-mount-Kv8hkb type fuse.glusterfs (rw,allow_other,max_read=131072)"
15:43 legreffier in case of georeplication...
15:44 legreffier and why we can't configure the mountpoint for that mount - it causes a mass of trouble here.
15:53 prth joined #gluster
15:54 JoeJulian legreffier: It's just an mkdtemp so the "user of the application can control the directory location by setting the TMPDIR, TEMP or TMP environment variables" according to the tempfile documentation.
15:56 legreffier JoeJulian: like in /etc/default/gluster ?
15:56 legreffier it might mess up some other programs if I set it system-wide, obviously
15:58 jiffin joined #gluster
15:59 JoeJulian Well, I would create a service drop-in for glusterd.service to add an Environment line to the [Service] section.
15:59 prth joined #gluster
16:00 shubhi_l joined #gluster
16:00 ankitraj joined #gluster
16:01 JoeJulian unless, of course, glusterd.service already has EnvironmentFile=/etc/default/gluster... I haven't looked.
16:02 JoeJulian Lame... "EnvironmentFile=-@sysconfdir@/sysconfig/glusterd"
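(A sketch of such a systemd drop-in, assuming a hypothetical target directory /var/tmp/gsyncd:)

    # /etc/systemd/system/glusterd.service.d/tmpdir.conf
    [Service]
    Environment=TMPDIR=/var/tmp/gsyncd

    # then reload and restart:
    systemctl daemon-reload
    systemctl restart glusterd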
16:06 ben948 joined #gluster
16:09 ben948 Trying to set up gluster. Four Raspberry Pi 3s with Jessie Lite. Gluster installed using apt install. 'sudo gluster peer status' shows they all see each other after the appropriate steps (each reports three peers)
16:10 legreffier JoeJulian: thanks a lot
16:10 JoeJulian legreffier: you're welcome
16:10 legreffier JoeJulian: so if i set it in /etc/sysconfig/glusterd , will this environment be used by gsyncd ?
16:11 JoeJulian It should. It's spawned from systemd so it should clone its environment.
16:11 legreffier JoeJulian: because gsyncd isn't one of its children... and we weren't allowed to have a 2-replica staging environment :P
16:11 ben948 Each Raspberry Pi 3 has a USB thumbdrive mounted at /mnt/8g and a subdir /mnt/8g/gv-test. Thumbdrives installed straight out of the box. Format FAT32.
16:11 legreffier (centos6 over here, no systemd)
16:11 JoeJulian bummer
16:12 JoeJulian systemd is awesome.
16:12 legreffier will try anyway, thanks a bunch
16:12 jiffin joined #gluster
16:12 JoeJulian ben948: fat32 does not support extended attributes so it not viable to use as a brick.
16:12 ben948 Trying to create a volume: sudo gluster volume create gv-test replica 4 transport tcp sp3:/mnt/8g/gv-test sp4:/mnt/8g/gv-test sp5:/mnt/8g/gv-test sp6:/mnt/8g/gv-test  volume create: gv-test: failed: Glusterfs is not supported on brick: sp3:/mnt/8g/gv-test.
16:13 JoeJulian s/it/is/
16:13 glusterbot What JoeJulian meant to say was: ben948: fat32 does not support extended attributes so is not viable to use as a brick.
16:13 ben948 Do I *absolutely* need to reformat away from FAT32?
16:14 JoeJulian The only other way I can think of would be to create a file and mount it as a loopback device.
16:14 JoeJulian where that file would have a filesystem that supports extended attributes.
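(A minimal sketch of that workaround, assuming the FAT32 drive is mounted at /mnt/8g and xfsprogs is installed; sizes and paths are illustrative, and note FAT32 caps a single file at 4 GiB:)

    # create a backing file on the FAT32 drive
    truncate -s 3G /mnt/8g/brick.img
    # put an xattr-capable filesystem in it
    mkfs.xfs /mnt/8g/brick.img
    # loop-mount it and use the mount point as the brick
    mkdir -p /mnt/brick
    mount -o loop /mnt/8g/brick.img /mnt/brick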
16:15 ben948 JoeJulian: that's what I thought based on a video I half listened to this morning. Thanks
16:15 JoeJulian You're welcome.
16:15 JoeJulian Out of curiosity, why are you hesitant to reformat?
16:15 ben948 I think the pain of setting up a loopback is higher than reformatting the drives over the long run.
16:16 ben948 And maintaining it.
16:27 msvbhat joined #gluster
16:29 kpease joined #gluster
16:30 Gambit15 joined #gluster
16:30 kpease_ joined #gluster
16:31 farhorizon joined #gluster
16:39 plarsen joined #gluster
16:45 jkroon not to mention that performance via loopback on top of fat32 will be rather horrible.
16:45 rwheeler joined #gluster
16:46 JoeJulian I never wanted anyone to think that I meant it was a _good_ idea... ;)
16:46 prth joined #gluster
16:48 jkroon i'd like to understand the use-case for glusterfs on a set of raspberries.
16:48 JoeJulian plex server
16:49 JoeJulian Or, at least, that's *a* possible use case.
16:50 jkroon still, normally when we use things like the raspberry it's because we want smaller, stand-alone units that are cheap and nasty and just do a single task. never considered clustering them.
16:51 skoduri joined #gluster
16:51 sage_ joined #gluster
16:52 ivan_rossi jkroon: testing on the cheap? a technical school here does just this kind of thing for edu labs.
16:52 JoeJulian I would use 3 pi-like devices with clustered storage as a top-of-rack management node to offer triple redundancy for config management, dhcp, repo mirrors, etc.
16:53 jkroon that's actually a very interesting idea
16:54 JoeJulian I wouldn't use Rpi, though, I found some uITX boards I could fit 3 in a 1U case and they have ipmi.
16:54 ivan_rossi however the pi is just 100 mbit. three cheap nuc boards
16:55 ivan_rossi joejulian: are those supermicro boards?
16:55 JoeJulian No, theirs are too big.
16:56 JoeJulian Oh, excuse me, they're mini-itx.
16:56 JoeJulian ASRock
16:56 JoeJulian www.asrockrack.com
16:57 ivan_rossi i found those: https://www.supermicro.com/products/motherboard/Xeon/C236_C232/X11SSH-TF.cfm
16:57 glusterbot Title: X11SSH-TF | Motherboards | Products - Super Micro Computer, Inc. (at www.supermicro.com)
16:57 ivan_rossi but they are micro-atx
16:58 JoeJulian 244mm vs 170mm
16:59 bwerthmann joined #gluster
17:00 JoeJulian See my rough design at https://drive.google.com/file/d/0B4p9a4V-9wr-NVJCZkQ4UkpNejg/view?usp=sharing (open with draw.io)
17:00 glusterbot Title: Untitled Diagram.html - Google Drive (at drive.google.com)
17:02 ivan_rossi looks cool
17:04 ivan_rossi left #gluster
17:06 jri joined #gluster
17:11 derjohn_mob joined #gluster
17:12 ben948 jkroon, distributed replicas are reliable, Raspberry Pis are cheap, and the Raspberry Pi 3 is sufficiently powerful that it would have been a not-unreasonable machine back when Gluster was first implemented, so it might be suitable for a range of trailing-edge applications where the future has yet to be distributed.
17:14 ben948 There are also a lot of places where they would not be appropriate and those are probably where Gluster is focused.
17:14 JoeJulian Because not everything needs to have blazing speed.
17:20 farhorizon joined #gluster
17:24 farhoriz_ joined #gluster
17:30 jkroon JoeJulian, ben948 - just a shame most of us probably work in environments where performance does count for a lot. i remember having a huge struggle a while back to get php-fpm to work half-decently off of a glusterfs-backed filesystem. it just didn't happen; it was not predictable enough.
17:30 jkroon that was still with 3.3, and i must say performance has improved a lot since then.
17:31 mhulsman joined #gluster
17:32 JoeJulian True, and with md-cache and the notification system that's now in place, you can set the cache timeout to something ridiculously high and not get stale entries, because the servers will invalidate any cache entries they know a client holds. (it's very cool)
17:32 Lee1092 joined #gluster
17:33 JoeJulian ^ something I learned at the gluster summit last week.
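(For reference, that behavior maps onto a handful of volume options; a sketch only, since the option names landed across later releases and may differ by version, and myvol is hypothetical:)

    gluster volume set myvol features.cache-invalidation on
    gluster volume set myvol features.cache-invalidation-timeout 600
    gluster volume set myvol performance.md-cache-timeout 600
    gluster volume set myvol performance.cache-invalidation on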
17:33 bwerthma1n joined #gluster
17:35 snehring joined #gluster
17:37 barajasfab joined #gluster
17:41 sage_ joined #gluster
17:42 gem joined #gluster
17:51 cliluw joined #gluster
18:12 abyss^_ JoeJulian: Regarding your solution from yesterday: resetting the attributes looks more difficult and I'm not sure that I understand the idea, so clearing the bricks seems safer ;)
18:13 abyss^_ additionally, I suppose the idea of resetting the xattrs is error-prone - what happens if I get the attribute-changing script wrong, for example... :/ (for example deleting files etc)
18:14 abyss^_ I will only remove directories from the list (7k directories), so I hope the self-heal won't take too long?
18:15 abyss^_ If you have a different opinion or advice, please tell me :) I will be grateful
18:23 JoeJulian If removing the directories works for you then that's probably the simplest solution.
18:25 raghu` joined #gluster
18:36 abyss^_ JoeJulian: I don't know if it works ;) I will check with your script ;) But unfortunately on a production environment :(
18:36 jiffin joined #gluster
18:41 JoeJulian @ppa
18:41 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.6: http://goo.gl/XyYImN, 3.7: https://goo.gl/aAJEN5, 3.8: https://goo.gl/gURh2q
18:42 abyss^_ that's why I ask a lot ;)
18:43 abyss^_ and I am trying to understand the best way for me ;)
18:47 JoeJulian Yeah, I understand. I hate when this kind of stuff happens.
18:50 abyss^_ JoeJulian: Thank you :)
18:55 Menaka joined #gluster
19:02 f0rpaxe joined #gluster
19:05 mhulsman joined #gluster
19:16 bwerthma1n kkeithley: is https://bugzilla.redhat.com/show_bug.cgi?id=1381339 of interest to you?
19:16 glusterbot Bug 1381339: high, unspecified, ---, bugs, NEW , starting glusterfs-server via init.d results in upstart reporting stop/waiting
19:17 JoeJulian Is *anybody* really interested in anything upstart does? ;)
19:18 kkeithley dunno, should I be interested in it? ;-)
19:19 kkeithley if you've got a fix, add it to the BZ.
19:19 ankitraj joined #gluster
19:20 ic0n joined #gluster
19:34 msvbhat joined #gluster
19:40 ilbot3 joined #gluster
19:40 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
19:46 Menaka left #gluster
19:47 nathwill joined #gluster
19:47 mhulsman joined #gluster
19:49 Philambdo1 joined #gluster
19:51 farhorizon joined #gluster
20:29 unclemarc joined #gluster
20:49 muneerse2 joined #gluster
20:53 nathwill hmm, so our geo-replica seems to have gone into faulty status, with errors like https://gist.github.com/nathwill/e27dbdec49478d9a200a6965df286a5c. is it possible to force re-sync the geo-replica from the master volume?
20:54 nathwill if i need to i can nuke the slave volume and start over, but if it's possible to do a more lightweight version that doesn't require re-syncing all the data, that's preferred
20:54 muneerse joined #gluster
20:56 JoeJulian nathwill: I *think* you can just delete and recreate the geo-sync configuration and it *should* only move the data that it needs to. It uses rsync to do the actual transfer.
20:57 nathwill yeah, rsync's my understanding too. alright, we'll find out shortly :D
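(A sketch of that delete-and-recreate cycle, assuming a hypothetical master volume mvol and slave slavehost::svol:)

    gluster volume geo-replication mvol slavehost::svol stop
    gluster volume geo-replication mvol slavehost::svol delete
    gluster volume geo-replication mvol slavehost::svol create push-pem force
    gluster volume geo-replication mvol slavehost::svol start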
21:00 nathwill damn, went quickly back to faulty :(. guess i get to nuke 'n pave
21:01 JoeJulian Same error?
21:01 nathwill yeah
21:04 JoeJulian It's some sort of recursive rmdir
21:04 JoeJulian what version are you running?
21:05 JoeJulian Looks like that part of the code has changed between whatever you're running and master.
21:07 nathwill we're running 3.8.4
21:07 nathwill i saw there was a bug open about some similar issue, but was marked as a dupe, and the parent was marked private in RH BZ
21:08 nathwill 3.8.4 via the centos-storage-sig packages, specifically
21:10 bwerthmann joined #gluster
21:11 Gambit15_ joined #gluster
21:12 Debloper1 joined #gluster
21:14 snehring_ joined #gluster
21:14 pasik_ joined #gluster
21:15 JoeJulian nathwill: looks like there should be a corresponding log entry on the slave.
21:15 JoeJulian It's hard to read this python script. Looks like a C coder wrote it.
21:16 sankarsh` joined #gluster
21:16 kpease joined #gluster
21:17 sloop joined #gluster
21:17 yawkat` joined #gluster
21:18 scuttle` joined #gluster
21:18 _nixpani1 joined #gluster
21:19 _nixpani1 joined #gluster
21:19 Ulrar_ joined #gluster
21:20 Philambdo1 joined #gluster
21:23 sysanthrope joined #gluster
21:24 kxseven joined #gluster
21:25 nathwill so that first snippet i believe was from the slave, i've just added the master-side log error that corresponded to that one
21:25 nathwill as a comment on the gist
21:27 nathwill i may just nuke and re-sync; this is an error during our initial sync - we never actually exited hybrid crawl
21:27 nathwill and i kind of suspect the problem may be related to having had split-brain files during the hybrid crawl
21:28 colm joined #gluster
21:28 Larsen_ joined #gluster
21:28 billputer joined #gluster
21:29 nathwill the master volume is also one we periodically stop and do volume-snapshots on, and i'm thinking that may have screwed up the state during the initial geo-replica sync
21:30 ron-slc joined #gluster
21:30 nathwill since the active node in the geo-replica was also the one we target for offline snapshotting.
21:35 JoeJulian Sounds logical
22:08 johnmilton joined #gluster
22:32 plarsen joined #gluster
22:33 jeremyh joined #gluster
22:49 luizcpg joined #gluster
22:56 plarsen joined #gluster
23:23 Menaka joined #gluster
23:53 Alghost joined #gluster
23:53 Alghost joined #gluster
23:54 Alghost_ joined #gluster
