
IRC log for #gluster, 2015-12-30


All times shown according to UTC.

Time Nick Message
00:01 21WAANDO8 joined #gluster
00:11 gbox Hi Happy Holidays!  I moved a 60GB VirtualBox VM to a gluster volume and it seems stuck booting up.  Are there any considerations or tricks to using gluster for VM storage?
00:12 JoeJulian There are settings for efficiency, but it should just work.
00:12 zhangjn joined #gluster
00:14 gbox Thanks Joe that's what I thought.  It could be a problem with the VM because I upgraded as well as moved it.  I've seen a few efficiency notes around the web but are there any big ones off the top of your head?
00:16 gbox glusterfs is grinding away with just this one VBox file (80GB actually)
00:17 JoeJulian vbox itself is pretty damned inefficient and can't use libgfapi, which avoids quite a bit of context switching.
00:20 gbox With the optimal setup it's great, and I still use vagrant.  Thanks again!
00:20 JoeJulian I use vagrant with libvirt. Much better.
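[A minimal sketch of what a libgfapi-backed disk looks like in a libvirt domain definition, as opposed to a VirtualBox image sitting on a FUSE mount; the volume name, image path and hostname below are made-up placeholders.]

    <!-- hypothetical disk stanza: qemu accesses the image over libgfapi,
         bypassing the FUSE mount; volume, path and host are placeholders -->
    <disk type='network' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source protocol='gluster' name='vmvol/images/vm1.qcow2'>
        <host name='gluster1.example.com' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>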
00:58 zhangjn joined #gluster
00:59 EinstCrazy joined #gluster
01:00 ninkotech joined #gluster
01:01 ninkotech_ joined #gluster
01:03 haomaiwang joined #gluster
01:28 zhangjn joined #gluster
01:47 daMaestro joined #gluster
01:51 18WABNY9X joined #gluster
02:01 haomaiwa_ joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:00 nangthang joined #gluster
03:01 5EXAAL7BL joined #gluster
03:11 zhangjn joined #gluster
03:15 calavera joined #gluster
03:15 mlncn joined #gluster
03:25 Peppaq joined #gluster
03:31 ninkotech_ joined #gluster
03:40 arcolife joined #gluster
03:41 arcolife joined #gluster
03:44 atinm joined #gluster
03:48 sakshi joined #gluster
03:48 nishanth joined #gluster
03:53 RameshN joined #gluster
03:54 kanagaraj joined #gluster
03:56 nbalacha joined #gluster
03:57 shubhendu joined #gluster
04:01 7GHABV703 joined #gluster
04:14 poornimag joined #gluster
04:15 Manikandan joined #gluster
04:17 pppp joined #gluster
04:24 zhangjn joined #gluster
04:25 vmallika joined #gluster
04:31 kshlm joined #gluster
04:37 gowtham joined #gluster
04:38 RameshN joined #gluster
04:40 gowtham joined #gluster
05:01 haomaiwa_ joined #gluster
05:14 skoduri joined #gluster
05:15 ndarshan joined #gluster
05:17 gem joined #gluster
05:20 skoduri joined #gluster
05:20 rafi joined #gluster
05:21 aravindavk joined #gluster
05:24 dusmant joined #gluster
05:28 overclk joined #gluster
05:28 zhangjn joined #gluster
05:28 gem joined #gluster
05:30 drue joined #gluster
05:30 atalur joined #gluster
05:33 kotreshhr joined #gluster
05:34 Bhaskarakiran joined #gluster
05:34 pppp joined #gluster
05:44 kshlm joined #gluster
05:47 anilshah joined #gluster
05:56 RedW joined #gluster
06:01 haomaiwa_ joined #gluster
06:06 DV__ joined #gluster
06:06 nishanth joined #gluster
06:09 kshlm joined #gluster
06:11 vimal joined #gluster
06:15 R0ok_ joined #gluster
06:19 haomaiwa_ joined #gluster
06:20 karnan joined #gluster
06:26 karnan_ joined #gluster
06:31 DV__ joined #gluster
06:32 kshlm joined #gluster
06:39 rafi1 joined #gluster
06:42 aravindavk joined #gluster
06:52 nbalacha joined #gluster
06:53 edong23 joined #gluster
06:58 ramky joined #gluster
07:00 Saravana_ joined #gluster
07:01 haomaiwa_ joined #gluster
07:02 kshlm joined #gluster
07:04 nishanth joined #gluster
07:06 Apeksha joined #gluster
07:09 Humble joined #gluster
07:15 jtux joined #gluster
07:20 vimal joined #gluster
07:26 F2Knight joined #gluster
07:28 rafi joined #gluster
07:29 daMaestro joined #gluster
07:37 TvL2386 joined #gluster
07:46 nbalacha joined #gluster
08:02 haomaiwang joined #gluster
08:08 kshlm joined #gluster
08:34 fsimonce joined #gluster
08:38 Vaelatern joined #gluster
08:44 atinm joined #gluster
08:58 zhangjn joined #gluster
09:01 haomaiwa_ joined #gluster
09:02 ahino joined #gluster
09:10 sadbox joined #gluster
09:15 paratai_ joined #gluster
09:30 zhangjn joined #gluster
09:32 mhulsman joined #gluster
09:44 sponge joined #gluster
09:51 shubhendu joined #gluster
10:00 zhangjn joined #gluster
10:01 haomaiwa_ joined #gluster
10:07 Manikandan joined #gluster
10:15 Manikandan joined #gluster
10:25 arcolife joined #gluster
10:37 Pupeno joined #gluster
10:45 mhulsman joined #gluster
10:48 zhangjn joined #gluster
10:58 EinstCrazy joined #gluster
10:59 atinm joined #gluster
11:01 haomaiwang joined #gluster
11:05 Manikandan joined #gluster
11:12 shubhendu joined #gluster
11:16 nangthang joined #gluster
11:25 Vaizki joined #gluster
11:45 zhangjn joined #gluster
11:55 kotreshhr left #gluster
11:56 rafi joined #gluster
12:01 77CAAFFGN joined #gluster
12:29 mhulsman joined #gluster
12:39 atalur joined #gluster
12:49 14WAAJMLU joined #gluster
13:01 haomaiwang joined #gluster
13:03 atalur_ joined #gluster
13:27 nangthang joined #gluster
13:28 unclemarc joined #gluster
13:29 Bhaskarakiran joined #gluster
13:40 mlncn joined #gluster
13:50 rafi joined #gluster
13:55 haomaiwang joined #gluster
13:56 nbalacha joined #gluster
13:58 21WAANIL2 joined #gluster
14:01 haomaiwa_ joined #gluster
14:04 rafi joined #gluster
14:07 jwd joined #gluster
14:12 rafi joined #gluster
14:17 haomaiwang joined #gluster
14:26 illogik joined #gluster
14:56 hamiller joined #gluster
15:01 6A4ABHADS joined #gluster
15:09 mhulsman joined #gluster
15:18 hagarth joined #gluster
15:19 jrm16020 joined #gluster
15:37 vmallika joined #gluster
15:38 jackdpeterson joined #gluster
15:41 jackdpeterson hello all, we recently experienced numerous fuse clients dropping their glusterFS (3.7) connection. One of the three gluster instances had a hung PID file, and another one I'm looking at is complaining as follows: 2015-12-30 15:39:53.035377] I [MSGID: 106006] [glusterd-svc-mgmt.c:323:glusterd_svc_common_rp
15:41 jackdpeterson W [socket.c:588:__socket_rwv] 0-nfs: readv on /var/run/gluster/e85b68e38b7a183db70449f797f7bbb0.socket failed (Invalid argument)
15:42 jackdpeterson The one that had the hung PID file had glusterd process running, and the pidfile was there. Root partition wasn't full or anything. A reboot resolved that issue; however, I'm unclear why that was the case.
15:46 ndk joined #gluster
15:48 portante joined #gluster
15:52 farhorizon joined #gluster
15:53 MessedUpHare joined #gluster
16:09 calavera joined #gluster
16:13 64MAANHBP joined #gluster
16:20 mlncn joined #gluster
16:25 RameshN joined #gluster
16:32 jwaibel joined #gluster
16:49 harish_ joined #gluster
16:51 avn joined #gluster
16:56 avn Folks, I have a test cluster of N nodes; which replication factor do I need to survive an outage of M nodes?
17:20 coredump joined #gluster
17:38 plarsen joined #gluster
17:45 nishanth joined #gluster
18:01 ahino joined #gluster
18:04 Rapture joined #gluster
18:08 mlncn_ joined #gluster
18:10 dlambrig joined #gluster
18:25 JoeJulian avn: The short answer is M+1. The true answer depends on SLA, MTBF and MTTR.
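[As a rough sketch of that "M+1" rule of thumb: a replica-3 volume keeps a copy of every file on each of three nodes, so the data stays reachable with up to two nodes down (quorum settings permitting). Hostnames and brick paths are placeholders.]

    # hypothetical 3-node replica volume: one copy of each file per node
    gluster volume create testvol replica 3 \
        node1:/bricks/testvol node2:/bricks/testvol node3:/bricks/testvol
    gluster volume start testvol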
18:27 illogik joined #gluster
18:33 farhorizon joined #gluster
18:48 avn JoeJulian: well, let's begin from the beginning ;) I have a test cluster of 3 nodes with replicated volumes. When I reboot the cluster nodes (one by one, in random order) I see a curious thing: /mnt/volume gets unmounted a few seconds after it mounts
18:49 avn With cryptic message: `[2015-12-30 15:19:00.118882] W [glusterfsd.c:1236:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7df5) [0x7fda10803df5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7fda11e6e855] -->/
18:49 glusterbot avn: ('s karma is now -120
18:49 avn usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7fda11e6e6d9] ) 0-: received signum (15), shutting down`
18:49 avn (I can put the full log in a gist, if that helps)
18:51 JoeJulian Not really. The problem stems from that "received signum (15)". Something's killing glusterfs. That's external to any gluster tools.
18:53 avn JoeJulian: any suggestions on how to figure it out? These are freshly installed VMs, with packages from http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-$releasever/$basearch/
18:53 haomaiwang joined #gluster
18:55 avn I know it's something in the boot-up process, because when I mount it manually it doesn't disappear
18:56 JoeJulian Ah, I thought this was existing mounts, not a mount on boot. That makes a lot more sense.
18:57 JoeJulian Do you have the _netdev option in fstab?
18:58 avn yes
18:58 avn I even tried writing a .service that retries mounting it and does `ls /mnt/volume` when that happens
18:59 avn and I see the output of ls with files in journalctl, but by the time I log in to the machine it's already unmounted
18:59 JoeJulian That ls shows files?
18:59 avn yes (a single file ;) I copied /etc/passwd there for tests)
19:00 JoeJulian Well that shoots down my next theory.
19:05 F2Knight joined #gluster
19:07 avn JoeJulian: https://gist.github.com/9d07f3094629fdc0cc57 full log from last boot attempt
19:07 glusterbot Title: mnt-container-volumes.log · GitHub (at gist.github.com)
19:08 JoeJulian https://gist.github.com/avnik/9d07f3094629fdc0cc57#file-mnt-container-volumes-log-L124-L127
19:08 glusterbot Title: mnt-container-volumes.log · GitHub (at gist.github.com)
19:10 avn Looks like the root of the problem. I fought the same issue on the server side a few days ago
19:12 JoeJulian So consul's not running or not providing an address for that service, afaict.
19:14 avn JoeJulian: there's no quorum at the moment of boot, I think. We resolve through the localhost instance of consul
19:16 JoeJulian The only idea I can think of is a service, installed to network-online.target, that waits until consul has quorum.
19:16 avn Add x-systemd.requires=consul.service,x-systemd.requires=dnsmasq.service to the mount options
19:16 avn (if that doesn't help, I'll remove the fstab entry and use a .mount unit instead)
19:17 JoeJulian Nice! I learned something today. Thanks!
19:18 avn I never tried x-systemd stuff before, just read about them ;)
19:18 JoeJulian Link?
19:19 avn man:systemd.mount
19:19 avn and for the server side I used the following .service -- https://github.com/CiscoCloud/microservices-infrastructure/commit/8a7a14a41912d74c57b5ec5776a20a69aa8b5388
19:19 glusterbot Title: Provide own glusterd.service · CiscoCloud/microservices-infrastructure@8a7a14a · GitHub (at github.com)
19:20 avn I was unable to achieve the same effect with drop-ins
19:22 avn JoeJulian: my cluster is rebooting, so I'll know the results in ~25 min
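[A sketch of the fstab entry avn describes, with placeholder server, volume and mountpoint names: _netdev defers the mount until the network is up, and the x-systemd.requires= options (see systemd.mount(5)) order the mount after, and make it require, consul and dnsmasq.]

    # hypothetical fstab line; server, volume and mountpoint are placeholders
    gluster.service.consul:/myvol  /mnt/volume  glusterfs  defaults,_netdev,x-systemd.requires=consul.service,x-systemd.requires=dnsmasq.service  0 0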
19:38 dlambrig joined #gluster
19:50 MACscr is gluster cpu or memory intensive?
19:57 JoeJulian Can be, depending on what it's doing. Self-heal does hash comparisons of segments of files which can be pretty cpu intensive. And caching uses memory.
19:57 JoeJulian glusterbot: oldbug 971
19:58 JoeJulian Hrm... I wonder what happened to that feature.
19:58 MACscr JoeJulian: i am just looking into running a gluster cluster on small arm sbc's
19:58 JoeJulian Cool. I've heard of people doing that.
20:07 JoeJulian hagarth: Any recollection why 2b0299da ? And why that's never been uncommented? People keep finding the command on google and then asking why it doesn't work.
20:08 JoeJulian (because, clearly, everything you can find on google should work in real life)
20:10 DV__ joined #gluster
20:14 hagarth JoeJulian: we never had a real usecase for volume renames then
20:24 JoeJulian Heh, I hadn't noticed there was no code associated with that.
20:30 hagarth JoeJulian: right, cli has code but nothing in glusterd.
20:31 hagarth JoeJulian: it is not difficult to implement now if there is a need.
20:38 JoeJulian hagarth: no clue, it's just the second time in a month I've been asked about why it doesn't work. Third time total.
20:40 JoeJulian hagarth: I'm looking at how the cli parses commands. Does it actually parse the same strings from volume_cmds and its ilk that are displayed as part of the help text?
20:40 calavera joined #gluster
20:43 hagarth JoeJulian: right, it is basically a bunch of string compares
20:43 JoeJulian That's pretty smart.
20:49 nage joined #gluster
20:50 onebree joined #gluster
20:50 onebree Hello, everyone
20:50 JoeJulian On behalf of everyone I return your greeting.
20:51 papamoose joined #gluster
20:51 onebree Thank you
20:52 onebree I have been working on this on/off for a few months. Is there a sure-fire way to get rsync working with glusterfs? From what I gather, the two programs create duplicate or lost files when interacting together
20:53 JoeJulian It's actually quite easy. GlusterFS is a filesystem. You mount it and rsync (or whatever else) to it.
20:53 JoeJulian What you *don't* do is write to the bricks that GlusterFS is using for *its* storage.
20:54 onebree Is there a tutorial or something I could look at? I have not personally used both programs together. But my boss said that the two of them together get crazy
20:56 JoeJulian https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Setting%20Up%20Clients/#manual-mount
20:56 glusterbot Title: Setting Up Clients - Gluster Docs (at gluster.readthedocs.org)
20:56 JoeJulian That's how you mount the volume.
20:56 onebree Okay, thank you.
20:56 onebree And that will work with rsync?
20:57 JoeJulian Yep, though I recommend using the --inplace option for optimal performance.
20:57 JoeJulian If your boss has already written directly to the bricks, your volume may be all screwed up.
20:58 onebree Why do you not want to write directly to the bricks?
20:58 onebree Also, what is `--inplace` option for? Gluster or rsync?
20:58 glusterbot onebree: `'s karma is now -1
20:58 onebree Sorry if I did anything wrong.
20:58 JoeJulian Same reason you don't dd to the middle of your hard drive and expect ext4 to figure out what to do with whatever you wrote there.
20:58 JoeJulian Heh, no, that's just a regex match fail.
20:59 JoeJulian `++
20:59 glusterbot JoeJulian: `'s karma is now 0
20:59 onebree Oh, okay :-)
20:59 onebree `++
20:59 glusterbot onebree: `'s karma is now 1
20:59 JoeJulian inplace is a rsync option. It avoids creating a temporary filename during the copy.
21:02 onebree Okay. Looking at the manual mount link you sent, our internal wiki directions show the same thing:
21:02 onebree mkdir MOUNT_POINT; mount -t glusterfs SERVER_IN_POOL:/VOLUME_NAME MOUNT_POINT    ====> give access to gluster data on client
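[Putting the two pieces together, a minimal sketch with placeholder server, volume and path names: mount the volume on the client, then rsync into the mountpoint (never into the bricks), using --inplace as suggested above.]

    # mount the volume (server and volume names are placeholders)
    mkdir -p /mnt/myvol
    mount -t glusterfs server1:/myvol /mnt/myvol
    # rsync into the mounted volume; --inplace skips rsync's temp-file-then-rename step
    rsync -av --inplace /srv/data/ /mnt/myvol/data/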
21:03 JoeJulian Then it should work fine. What problem are you experiencing?
21:04 onebree Honestly, I do not remember :-P I started this project in July or so, and have worked very little on it since August. I will need to refresh my memory as to why I am even doing this haha
21:06 JoeJulian Heh, sure thing. I'm usually around during work hours GMT-8 if you run in to a problem.
21:07 onebree Thank you. I am -5 GMT, so most of our hours align.
21:09 onebree We currently use Gluster 3.5. Have there been any significant changes that would lead you to recommend upgrading?
21:09 onebree ^ version 3.5.3
21:13 JoeJulian Yes, but I can't think off the top of my head which ones made me switch. Wait until next week and jump to 3.7.7, imho.
21:14 onebree Okay. This is ultimately not my decision, I was just wondering what major changes were present
21:14 JoeJulian Oh, I know. The thing that got my attention was the work done to rebalance. It actually works now.
21:15 onebree What do you mean by "rebalance" and "actually works now" ?
21:16 JoeJulian @lucky gluster rebalance
21:16 glusterbot JoeJulian: http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Rebalancing_Volumes
21:17 JoeJulian That's not lucky... sheesh.
21:17 JoeJulian But the essence is the same, so go ahead and read that.
21:18 JoeJulian Prior to 3.7, I rarely was able to get a rebalance to complete. That doesn't seem to be a problem anymore.
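[For reference, the rebalance being discussed is driven from the CLI roughly like this; the volume name is a placeholder.]

    gluster volume rebalance myvol start
    gluster volume rebalance myvol status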
21:18 avn JoeJulian: btw, does glusterfs.fuse rely on resolving only at mount time, or will it die as soon as DNS goes down?
21:19 JoeJulian It will be resilient if dns fails, unless some other connection failure happens and it needs to reconnect.
21:20 shaunm joined #gluster
21:21 avn JoeJulian: so if the server we're currently connected to goes down, it should be resolvable at reconnect time?
21:22 JoeJulian @mount server
21:22 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
21:22 JoeJulian So each of the brick servers must be resolvable.
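[One common way to soften that mount-server caveat, not mentioned in the discussion above, is to hand the client extra volfile servers at mount time via the mount helper's backup-volfile-servers option; hostnames here are placeholders, and the option name should be verified against your glusterfs version.]

    # tried in order if server1 is unreachable when fetching the volume definition
    mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/myvol /mnt/myvol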
21:25 avn so if `option remote-host` in the .vol file is a dotted IP rather than a hostname, will it be more stable?
21:25 JoeJulian Until you need to change your network.
21:25 jwd joined #gluster
21:25 JoeJulian imho, if you don't have stable dns, you don't have a stable network.
21:26 avn .service.consul depends on consul quorum
21:26 avn and as I see it -- if the connect fails it gets retried, but if gethostbyname() fails, glusterfs.fuse simply dies
21:26 JoeJulian So you have to wait until it's in quorum.
21:27 avn Thinking about it, to see if it helps
21:27 avn anyway it doesn't explain why my mount-monitor.sh script is able to mount and `ls` it, but a few seconds later the mount disappears
21:28 onebree Is there a way to tell rsync to ignore directories following a regex?
21:28 onebree (I know this is not an rsync room, but the only possible one is stale)
21:29 avn ofc I can adjust my script to check the mountpoint every 10 sec and mount it if it's not mounted (and keep doing that forever ;))
21:29 JoeJulian avn: might be able to use systemd's automount for that?
21:30 avn maybe, I am not so familiar with automount
21:30 avn (tbh I am also not so familiar with gluster; I got a troublesome legacy ansible role which I'm trying to fix)
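[A sketch of the systemd automount idea in fstab form, assuming placeholder names: noauto keeps the volume from being mounted at boot, and x-systemd.automount generates an automount unit so the first access to the mountpoint triggers the mount.]

    # hypothetical fstab line using systemd's automount
    gluster.service.consul:/myvol  /mnt/volume  glusterfs  defaults,_netdev,noauto,x-systemd.automount  0 0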
21:33 MugginsM joined #gluster
21:34 JoeJulian avn: What about disabling automount in fstab and instead using a consul watch to mount the volume after the service comes available?
21:35 JoeJulian ie. consul watch -type service -name gluster "/bin/mount /mnt/mymount"
21:43 haomaiwang joined #gluster
22:00 onebree left #gluster
22:07 drue joined #gluster
22:19 zoldar joined #gluster
22:24 zoldar joined #gluster
22:25 coreping_ joined #gluster
22:29 MugginsM joined #gluster
22:39 MugginsM joined #gluster
22:45 zoldar joined #gluster
22:48 nage joined #gluster
22:50 zoldar joined #gluster
22:50 coreping_ joined #gluster
23:15 cyberbootje joined #gluster
23:20 Pupeno joined #gluster
23:31 haomaiwa_ joined #gluster
23:58 gbox joined #gluster
23:58 zhangjn joined #gluster
