
IRC log for #gluster, 2017-05-05


All times shown according to UTC.

Time Nick Message
00:00 cmd_pancakes ok great... I was also trying to do this more or less live during a maintenance window, so I bailed and figured it wouldn't work, probably too early... something else could have been the problem at the time
00:00 cmd_pancakes thanks JoeJulian! i'll report back on my results
00:30 msvbhat joined #gluster
00:45 arpu joined #gluster
00:51 shyam left #gluster
01:19 shdeng joined #gluster
01:49 ilbot3 joined #gluster
01:49 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:00 cholcombe joined #gluster
02:18 riyas joined #gluster
02:51 kramdoss_ joined #gluster
02:56 gyadav__ joined #gluster
03:05 Gambit15 joined #gluster
03:11 skoduri joined #gluster
03:42 k0nsl joined #gluster
03:42 k0nsl joined #gluster
03:47 msvbhat joined #gluster
03:49 itisravi joined #gluster
03:56 k0nsl joined #gluster
03:56 k0nsl joined #gluster
03:58 nbalacha joined #gluster
04:10 riyas joined #gluster
04:12 gyadav__ joined #gluster
04:27 msvbhat joined #gluster
04:29 Karan joined #gluster
04:31 ppai joined #gluster
04:35 amarts joined #gluster
04:36 skumar joined #gluster
04:38 gyadav__ joined #gluster
04:46 Saravanakmr joined #gluster
04:47 buvanesh_kumar joined #gluster
04:51 Prasad joined #gluster
04:51 ankitr joined #gluster
05:00 karthik_us joined #gluster
05:02 mdavidson joined #gluster
05:04 jiffin joined #gluster
05:06 sanoj joined #gluster
05:17 msvbhat joined #gluster
05:21 Saravanakmr joined #gluster
05:28 ndarshan joined #gluster
05:31 amarts joined #gluster
05:32 hgowtham joined #gluster
05:33 prasanth joined #gluster
05:34 kdhananjay joined #gluster
05:39 mbukatov joined #gluster
05:39 msvbhat joined #gluster
05:40 sanoj joined #gluster
05:45 ashiq joined #gluster
05:47 marlinc joined #gluster
05:48 Karan joined #gluster
05:54 apandey joined #gluster
05:57 rafi joined #gluster
05:58 itisravi joined #gluster
06:00 hvisage joined #gluster
06:21 sona joined #gluster
06:24 nbalacha joined #gluster
06:35 Saravanakmr joined #gluster
06:37 [diablo] joined #gluster
06:40 nbalacha_ joined #gluster
06:42 ankitr joined #gluster
06:45 om2_ joined #gluster
06:51 ivan_rossi joined #gluster
06:52 Saravanakmr joined #gluster
07:06 itisravi joined #gluster
07:18 rastar joined #gluster
07:19 ppai joined #gluster
07:25 Saravanakmr joined #gluster
07:30 skumar_ joined #gluster
07:38 amarts joined #gluster
07:47 ppai joined #gluster
08:14 flying joined #gluster
08:18 bartden joined #gluster
08:27 derjohn_mob joined #gluster
08:28 sanoj joined #gluster
08:28 percevalbot joined #gluster
09:08 derjohn_mob joined #gluster
09:11 msvbhat joined #gluster
09:34 jtux joined #gluster
09:36 bwerthmann joined #gluster
09:40 msvbhat joined #gluster
09:40 vinurs joined #gluster
09:41 karthik_us joined #gluster
09:52 amarts joined #gluster
10:35 sona joined #gluster
10:52 cliluw joined #gluster
11:25 amarts joined #gluster
11:34 jtux joined #gluster
11:37 foster joined #gluster
11:38 DV joined #gluster
11:39 Wizek__ joined #gluster
11:53 derjohn_mob joined #gluster
11:53 amarts joined #gluster
12:01 ankitr joined #gluster
12:12 cholcombe joined #gluster
12:18 jtux left #gluster
12:19 shaunm joined #gluster
12:35 alezzandro joined #gluster
12:36 alezzandro joined #gluster
12:38 baber joined #gluster
12:43 Prasad_ joined #gluster
12:50 Prasad joined #gluster
12:51 msvbhat joined #gluster
12:51 k0nsl joined #gluster
12:51 k0nsl joined #gluster
12:54 skumar__ joined #gluster
12:59 sona joined #gluster
13:04 twisted` joined #gluster
13:08 shaunm joined #gluster
13:09 [diablo] joined #gluster
13:09 skoduri joined #gluster
13:09 Karan joined #gluster
13:10 shyam joined #gluster
13:12 jkroon joined #gluster
13:14 k0nsl joined #gluster
13:14 Acinonyx joined #gluster
13:14 bitonic joined #gluster
13:14 ic0n joined #gluster
13:17 vinurs joined #gluster
13:24 mlhess joined #gluster
13:26 cholcombe joined #gluster
13:26 Acinonyx joined #gluster
13:26 k0nsl joined #gluster
13:26 k0nsl joined #gluster
13:26 cholcombe joined #gluster
13:27 ic0n joined #gluster
13:27 bitonic joined #gluster
13:29 derjohn_mob joined #gluster
13:40 sona joined #gluster
13:42 ryno joined #gluster
13:43 bwerthmann joined #gluster
13:43 skylar joined #gluster
13:45 ryno Hi, I have a question for you guys
13:45 misc (what about persons who are not guys ?)
13:47 ryno is it normal that I should stop gluster (service glusterfs-server stop) before shutting down a node?
13:48 ryno If I don't do that, the volumes will not be available for 42 seconds (ping-timeout)
13:48 misc mhh, how do you shut down the node?
13:49 misc as I suspect halt also stops gluster
13:49 ryno poweroff
13:50 misc mhh, and it goes through the complete shutdown sequence? that's not a brutal shutdown?
13:52 ryno I got the same behaviour if I use reboot
13:52 misc what setup do you have ?
13:53 ryno debian 8, glusterfs-server:amd64/jessie 3.8.11-1
13:54 ryno 4 nodes with 3 volumes 2X4 replicated distributed
13:54 misc ok, that's indeed weird
13:55 ryno it's not 2X4 but 2X2 =4
13:56 saali joined #gluster
13:57 ryno I read that it's not good to bypass the 42 sec by reducing the ping-timeout
14:00 ryno I just upgraded from gluster 3.5 but I don't know if I had this behaviour on 3.5 !
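(editor's note: a sequence consistent with ryno's observation, sketched here with hedges — the init-script name matches the Debian 8 setup above, but "myvol" and the exact process names are illustrative, not from the log)

```shell
# Stopping gluster before poweroff lets the brick processes close their
# TCP connections cleanly, so clients fail over immediately instead of
# waiting out the 42 s network.ping-timeout on an abrupt shutdown.
service glusterfs-server stop   # stops the glusterd management daemon
pkill glusterfsd                # stops any remaining brick processes
poweroff

# The timeout itself can be inspected per volume (lowering it is
# generally discouraged, as ryno notes above):
gluster volume get myvol network.ping-timeout
```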
14:04 shyam joined #gluster
14:07 ankitr joined #gluster
14:28 cholcombe joined #gluster
14:29 shyam joined #gluster
14:35 skumar joined #gluster
14:37 skumar_ joined #gluster
14:45 kramdoss_ joined #gluster
14:47 farhorizon joined #gluster
14:51 ppai joined #gluster
15:02 wushudoin joined #gluster
15:09 ccha3 I have a replicated volume on 2 servers and nfs-ganesha. NFS on a Windows client is much slower compared to an NFS Linux client
15:10 ccha3 tested with smallfile_cli.py create 1 thread 5000 files in 1 folder
15:11 ccha3 30k size
15:12 ccha3 on the Windows traffic screen, there is more receive than send
15:12 vbellur joined #gluster
15:12 ccha3 that's so weird, and I don't understand
15:12 skumar_ joined #gluster
15:12 skumar joined #gluster
15:13 ccha3 at the end of the test, there is 3x more receive than send
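(editor's note: ccha3's benchmark roughly corresponds to the smallfile invocation below — a hedged reconstruction, with option names per smallfile's documentation and an illustrative mount path, not taken from the log)

```shell
# Hedged reconstruction of the test described above: 1 thread creating
# 5000 files of 30 KB each under an NFS-mounted directory.
# /mnt/ganesha-nfs is an assumed path.
python smallfile_cli.py --operation create --threads 1 \
    --files 5000 --file-size 30 --top /mnt/ganesha-nfs
```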
15:18 jiffin joined #gluster
15:20 msvbhat joined #gluster
15:27 wdeignan joined #gluster
15:32 bettsc joined #gluster
15:40 alvinstarr joined #gluster
15:51 wdeignan left #gluster
15:54 bettsc I have a question on geo-replication and symlinks. I currently have geo-replication set up, and in the log file I see a few "_GMaster: ENTRY FAILED" errors for symlinks. Is there a problem with symlinks on gluster, or should I be looking at something else?
15:55 bettsc using debian 8 and gluster 3.10.1
15:55 bettsc bricks are ext4
16:02 baber joined #gluster
16:08 hvisage_ joined #gluster
16:09 bettsc the symlinks that I get errors on use relative links. I had some with full-path links and fixed those by creating the missing file on the replication host
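(editor's note: for clarity, the two kinds of symlink bettsc distinguishes — relative vs. full-path — shown as a plain-filesystem sketch; no gluster involved, and all paths here are made up)

```shell
# A relative symlink stores only the target's name, a full-path
# symlink stores the absolute path; geo-replication reportedly choked
# on the relative kind.
mkdir -p /tmp/geo-demo && cd /tmp/geo-demo
echo data > target.txt
ln -sfn target.txt rel_link                # relative symlink (the failing case)
ln -sfn /tmp/geo-demo/target.txt abs_link  # full-path symlink
readlink rel_link   # -> target.txt
readlink abs_link   # -> /tmp/geo-demo/target.txt
```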
16:12 riyas joined #gluster
16:29 shyam joined #gluster
16:30 hvisage ping:
16:30 hvisage is the glusterbot asleep??
16:31 hvisage JoeJulian: I eventually have a stable (from-scratch) bootstrapping CentOS 7 setup that mounts the GlusterFS volumes… I did mention systemd makes that difficult in the bootstrapping case??
16:32 hvisage thanks for the encouragement and direction pointing
16:34 ivan_rossi left #gluster
16:37 gem joined #gluster
16:46 JoeJulian hvisage: Did you have to trick systemd/mount?
16:47 hvisage No, I had to hammer it with sleep, awk, grep — and then gluster's fuse mounts "fake" being mounted for 5 seconds, so I have to kick it in the butt… but it appears to be stable…
16:49 hvisage systemd .service unit: https://bitbucket.org/dismyne/gluster-ansibles/src/9b0216a6cf231624f48315d74004d31347e9fc02/ansible/files/glusterfsmounts.service-centos?at=master&fileviewer=file-view-default
16:49 glusterbot Title: dismyne / gluster-ansibles / source / ansible / files / glusterfsmounts.service-centos — Bitbucket (at bitbucket.org)
16:49 hvisage and the test-mounts.sh script called from the service unit: https://bitbucket.org/dismyne/gluster-ansibles/src/9b0216a6cf231624f48315d74004d31347e9fc02/ansible/files/test-mounts.sh?at=master&fileviewer=file-view-default
16:49 glusterbot Title: dismyne / gluster-ansibles / source / ansible / files / test-mounts.sh — Bitbucket (at bitbucket.org)
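(editor's note: the linked test-mounts.sh is the authoritative version; the sketch below only illustrates the retry idea hvisage describes — the mount point, retry count, and function names are all assumptions)

```shell
#!/bin/sh
# is_gluster_mounted checks a mounts table for a live fuse.glusterfs
# entry at the given mount point.
is_gluster_mounted() {    # $1 = mount point, $2 = mounts file
    grep -q " $1 fuse.glusterfs " "$2"
}

# Boot-time loop: re-kick the fstab entry until the fuse mount shows
# up for real, instead of trusting the first (possibly fake) mount.
wait_for_mount() {        # $1 = mount point
    for i in 1 2 3 4 5; do
        is_gluster_mounted "$1" /proc/mounts && return 0
        mount "$1"        # retry the fstab entry
        sleep 5           # give the fuse client time to settle
    done
    return 1
}
```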
16:50 hvisage glusterbot: does that earn me some extra karma?
17:00 alvinstarr joined #gluster
17:01 bettsc This is pretty much the exact problem I am having https://bugzilla.redhat.com/show_bug.cgi?id=1350179 except on 3.10.1. I can't find any other docs/guides on this symlink problem
17:01 glusterbot Bug 1350179: medium, medium, ---, avishwan, CLOSED EOL, entry failed error with symlink files and dirs using distributed geo replication
17:13 vinurs joined #gluster
17:13 JoeJulian hvisage++
17:13 glusterbot JoeJulian: hvisage's karma is now 1
17:15 JoeJulian bettsc: is the error still, "_GMaster: ENTRY FAILED"?
17:15 bettsc yes
17:16 JoeJulian Please clone that bug and assign it to 3.10.
17:16 bettsc ok will do
17:19 rafi joined #gluster
17:24 farhoriz_ joined #gluster
17:24 Wizek__ joined #gluster
17:32 jkroon joined #gluster
17:47 riyas joined #gluster
17:49 farhorizon joined #gluster
17:50 shyam joined #gluster
17:56 sona joined #gluster
18:00 rastar joined #gluster
18:16 shyam joined #gluster
18:22 hvisage why oh why, is *nano* a *_dependency_* for glusterfs-ganesha ???? O_O O_o o_o o_O
18:22 * JoeJulian raises an eyebrow
18:22 ndevos wow, awesome!
18:23 ndevos hvisage: where did you see that?
18:23 hvisage so if I remove nano, yum also removes glusterfs-ganesha  !!!
18:23 ndevos what distribution is that?
18:24 hvisage Centos 7, glusterfs 3.10
18:24 hvisage https://pastebin.com/xzWyTkrF
18:24 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
18:25 hvisage Sorry sorry glusterbot, here it is: https://paste.fedoraproject.org/paste/AafVPEpq~kCeXsEckaaMjV5M1UNdIGYhyRLivL9gydE=
18:25 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
18:25 ndevos ah, so glusterfs-ganesha depends on pcs (for pacemaker), pcs depends on python-clufter, and that depends on nano
18:28 hvisage I kept wondering why those packages kept being added/removed in my ansibles ;(
18:29 hvisage Yeah ndevos, that appears to be the case then… quite an unintended consequence when you remove nano to not have it hanging around…
18:29 ndevos and python-clufter really depends on nano, it is hardcoded in this section  - https://src.fedoraproject.org/cgit/rpms/clufter.git/tree/clufter.spec#n74
18:29 glusterbot Title: clufter.spec - rpms/clufter.git - clufter (at src.fedoraproject.org)
18:31 ndevos you could open an issue on https://pagure.io/clufter and ask why nano is required... maybe it is an old dependency - or you send a patch to do whatever task it does in python
18:31 glusterbot Title: Overview - clufter - Pagure (at pagure.io)
18:34 ndevos hvisage: aha, there is a default editor set, and that is nano - the docs mention why an editor needs to be present https://pagure.io/clufter/blob/master/f/__root__/doc/env-vars.txt
18:34 glusterbot Title: Tree - clufter - Pagure (at pagure.io)
18:35 ndevos hvisage: most tools that depend on an editor use 'vi' as default, you could open an issue and request that as well - it would be more in line with many other tools
18:35 ndevos skoduri: ^
18:38 mlg9000 joined #gluster
18:44 kramdoss_ joined #gluster
18:48 farhorizon joined #gluster
18:58 amarts joined #gluster
18:59 derjohn_mob joined #gluster
19:09 hvisage Thanks ndevos, I’ve raised an issue on pagure.io/clufter about nano
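(editor's note: the chain ndevos traced — glusterfs-ganesha → pcs → python-clufter → nano — can be confirmed with standard tooling; a hedged sketch for CentOS 7, where repoquery ships in yum-utils)

```shell
# Walk the dependency chain one hop at a time:
repoquery --requires glusterfs-ganesha | grep -i pcs
repoquery --requires pcs | grep -i clufter
repoquery --requires python-clufter | grep -i nano

# Or preview the removal cascade without committing to it:
yum remove nano --assumeno
```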
19:23 thayward joined #gluster
19:27 thayward I'm having trouble understanding the gluster architecture in the context of my use case. We want to have data replicated to each site, and have each site be able to operate autonomously. If a site is disconnected from the network, it should keep working. Once it reconnects, new data should be synced both to and from the detached site.
19:27 thayward It sounds like georeplication is not appropriate for this, because it's a master-slave architecture
19:29 rastar joined #gluster
19:38 mlg9000 joined #gluster
19:44 farhorizon joined #gluster
19:49 JoeJulian thayward: have you considered what will happen when both sites change the same file?
19:50 thayward indeed, that'll be a problem and we'll need to define some logic. Figured most recent timestamp wins.
19:50 thayward First app to be implemented will be IMAP, so there are unlikely to be changes, just new files and deletes
19:52 JoeJulian You're probably looking for something like csync
19:53 JoeJulian or unison
19:57 bwerthmann joined #gluster
19:58 bwerthmann is there a way to disable the backtrace handler in glusterd?
20:00 thayward hmm, both csync and unison appear to be designed to sync only two systems
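(editor's note: pairwise composition from a hub site is the usual workaround for that two-replica limit; a hedged unison sketch — site-b, site-c, and /srv/mail are illustrative names only, not from the log)

```shell
# unison syncs exactly two replicas per run, so a multi-site setup is
# composed pairwise, e.g. hub-and-spoke from the primary site:
unison /srv/mail ssh://site-b//srv/mail -batch -prefer newer
unison /srv/mail ssh://site-c//srv/mail -batch -prefer newer
# "-prefer newer" approximates the newest-timestamp-wins conflict
# policy thayward describes above.
```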
20:02 JoeJulian bwerthmann: not via configuration, no. Out of curiosity, why?
20:02 JoeJulian You can do it with an ld_preload - theoretically.
20:16 bwerthmann JoeJulian: https://bugzilla.redhat.com/show_bug.cgi?id=1447523#c1
20:16 glusterbot Bug 1447523: urgent, unspecified, ---, bugs, NEW , Glusterd segmentation fault in ' _Unwind_Backtrace' while running peer probe
20:17 JoeJulian Oh, nifty.
20:19 JoeJulian mallorn: https://twitter.com/JoeCyberGuru/status/860588382765764609
20:34 shyam joined #gluster
20:50 farhorizon joined #gluster
20:57 farhorizon joined #gluster
22:00 foster joined #gluster
22:02 bwerthmann joined #gluster
22:09 vbellur joined #gluster
22:14 shyam joined #gluster
22:32 shyam joined #gluster
22:38 Vapez joined #gluster
22:38 Vapez joined #gluster
23:23 shyam joined #gluster
