
IRC log for #gluster, 2015-09-18


All times shown according to UTC.

Time Nick Message
00:23 nangthang joined #gluster
00:23 Gill joined #gluster
00:37 RayTrace_ joined #gluster
00:53 pellaeon joined #gluster
00:54 pellaeon Hi, I am trying to run gluster server on FreeBSD, I built from master, everything seems to be working fine, except I cannot mount
00:55 pellaeon this is the client log file: https://gist.github.com/pellaeon/c6c382e8b8f5a995e19f
00:55 glusterbot Title: mnt.log · GitHub (at gist.github.com)
00:56 pellaeon client is mount.gluster , built from master, on Ubuntu
00:57 pellaeon mount.glusterfs, sorry
00:57 alghost what is your commands to create volume and to mount volume
00:58 pellaeon This is the tcpdump result: https://gist.github.com/pellaeon/c6c382e8b8f5a995e19f#file-tcpdump
00:58 glusterbot Title: gluster debug · GitHub (at gist.github.com)
01:00 pellaeon volume create xxx replica 2 10.0.0.7:/brick2/gv/0 10.0.0.8:/brick2/gv/0
01:00 pellaeon and mount command: mount.glusterfs 10.0.0.7:/xxx /mnt
01:01 pellaeon There doesn't seem to be error or warning message on the servers
01:03 pellaeon I'm sorry, I forgot to redact the volume name in the log, the volume name is actually "saigon"
01:03 alghost give me times
01:03 pellaeon sure
01:04 alghost I'll test that and then I'll inform my result
01:04 pellaeon thanks a lot
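For reference, the end-to-end sequence pellaeon describes above, as a minimal sketch; it assumes glusterd is already running on both servers and reuses the addresses, brick paths and volume name from the log:

    gluster peer probe 10.0.0.8                 # run once, from 10.0.0.7
    gluster volume create xxx replica 2 10.0.0.7:/brick2/gv/0 10.0.0.8:/brick2/gv/0
    gluster volume start xxx
    # on the client:
    mount -t glusterfs 10.0.0.7:/xxx /mnt       # equivalent to: mount.glusterfs 10.0.0.7:/xxx /mnt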
01:30 zhangjn joined #gluster
01:30 bcicen joined #gluster
01:37 RayTrace_ joined #gluster
01:45 julim joined #gluster
01:56 haomaiwa_ joined #gluster
01:59 alghost I'm testing the scenario
01:59 alghost and I got the same returns
02:00 alghost I think you should post the bug to bug zilla, if you can
02:00 alghost Sorry, I can't help you :(
02:05 haomaiw__ joined #gluster
02:06 haomaiw__ joined #gluster
02:10 alghost I want to report some information about the gluster website. Does anyone know a way? like emailing someone..
02:12 pellaeon alghost: you mean you tested on FreeBSD ? wow
02:13 pellaeon GlusterFS doesn't say they support FreeBSD
02:13 alghost Oh sorry
02:13 alghost no I tested on ubuntu
02:14 pellaeon ah, this is nice, too, at least it's not an OS issue
02:14 alghost yes, I think this is bug of gluster.
02:15 pellaeon I'm quite surprised... I thought volume with replica 2 should be a pretty common configuration, isn't it ?
02:15 alghost yes
02:16 alghost also I tested distribute volume.
02:16 alghost but not working
02:16 alghost failed to get the 'volume file'
02:16 alghost like you
02:16 pellaeon wow, a broken master, I'm surprised...
02:16 alghost yes I think so
02:17 alghost I heard master is stable.. but now
02:17 alghost I don't understand
02:17 baojg joined #gluster
02:19 cliluw joined #gluster
02:21 harish joined #gluster
02:22 ira joined #gluster
02:22 baojg joined #gluster
02:23 pellaeon alghost: do you know a version that works?
02:24 alghost actually I'm using stable version 3.7.4
02:24 alghost mistake
02:24 alghost latest version
02:24 alghost you can download at http://download.gluster.org/pub/gluster/glusterfs/LATEST/
02:24 glusterbot Title: Index of /pub/gluster/glusterfs/LATEST (at download.gluster.org)
02:25 pellaeon the latest seems to be 3.7.4
02:26 pellaeon I'll see if I can build that version on FreeBSD...
02:27 alghost ok, cheer
02:27 pellaeon alghost: thanks!
02:34 JoeJulian pellaeon: I assume you started the volume?
02:34 pellaeon JoeJulian: yes
02:35 JoeJulian Can I see "gluster volume status"
02:37 pellaeon JoeJulian: give me some time, I just stopped gluster cause I want to test 3.7.4
02:37 pellaeon restarting it again
02:37 pellaeon (master, I mean)
02:38 RayTrace_ joined #gluster
02:39 alghost In my case
02:39 alghost ubuntu, volume: distributed
02:39 pellaeon hmm I started glusterd first, then I issue volume start, it says failed, but when I start it again, it says already started
02:39 alghost all of status is fine
02:39 pellaeon the status now:
02:39 pellaeon gluster> volume status
02:39 pellaeon Status of volume: saigon
02:39 pellaeon Gluster process                             TCP Port  RDMA Port  Online  Pid
02:39 pellaeon ------------------------------------------------------------------------------
02:39 pellaeon Brick a.gluster:/brick2/gv/0                49153     0          Y       28649
02:39 glusterbot pellaeon: ----------------------------------------------------------------------------'s karma is now -6
02:39 pellaeon Brick b.gluster:/brick2/gv/0                49152     0          Y       2459
02:39 pellaeon NFS Server on localhost                     N/A       N/A        N       N/A
02:39 pellaeon Self-heal Daemon on localhost               N/A       N/A        Y       28655
02:40 pellaeon NFS Server on b.gluster                     N/A       N/A        N       N/A
02:40 pellaeon Self-heal Daemon on b.gluster               N/A       N/A        Y       2465
02:40 pellaeon Task Status of Volume saigon
02:40 pellaeon ------------------------------------------------------------------------------
02:40 glusterbot pellaeon: ----------------------------------------------------------------------------'s karma is now -7
02:40 pellaeon There are no active volume tasks
02:40 pellaeon oops
02:40 JoeJulian Dude... you used gist for everything else.... :P
02:41 pellaeon sorry, I wasn't thinking :-p
02:42 pellaeon https://gist.github.com/pellaeon/c6c382e8b8f5a995e19f
02:42 glusterbot Title: gluster debug · GitHub (at gist.github.com)
02:42 JoeJulian So the error looks like it's coming from glusterd (port 24007). Check the log (/var/log/glusterfs/etc-glusterfs-glusterd.vol.log)
02:42 JoeJulian And I've got to run. I'll check back later.
02:43 JoeJulian I know CI tests are run against one of the BSDs. I'm not sure which one though.
02:43 JoeJulian I expect it to work.
02:43 pellaeon NetBSD, I think
02:43 pellaeon aha
02:44 pellaeon https://gist.github.com/pellaeon/c6c382e8b8f5a995e19f#file-gistfile1-txt
02:44 glusterbot Title: gluster debug · GitHub (at gist.github.com)
02:45 pellaeon checking allow-insecure values...
02:50 pellaeon I didn't set option rpc-auth-allow-insecure on
02:50 pellaeon retrying mount
02:51 pellaeon yay! it mounts!
02:51 pellaeon JoeJulian: thank you
02:54 pellaeon I've never noticed that log file :-p
02:54 neha_ joined #gluster
02:55 alghost pellaeon:  you mean you mount gluster master ver ?
02:55 pellaeon alghost: yes
02:56 pellaeon both client and server are built from master
02:56 alghost could you show me volume info?
02:56 hchiramm_home joined #gluster
02:56 pellaeon you mean the volume file?
02:57 alghost I mean gluster volume XXX info
02:57 alghost I want to know options
02:57 alghost I set allow-insecure on, but same
02:58 pellaeon you also need to modify /etc/glusterfs/glusterd.vol
02:58 pellaeon I followed instruction in this post https://www.gluster.org/pipermail/gluster-users/2013-December/015125.html
02:58 glusterbot Title: [Gluster-users] Error when trying to connect to a gluster volume (with libvirt/libgfapi) (at www.gluster.org)
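The fix pellaeon describes above combines a glusterd option with a per-volume option, following the linked post; a sketch, using the volume name from the log and assuming a glusterd restart and a volume stop/start are acceptable:

    # /etc/glusterfs/glusterd.vol, inside the 'volume management' block, on every server:
    option rpc-auth-allow-insecure on
    # then restart glusterd and allow insecure ports on the volume itself:
    gluster volume set saigon server.allow-insecure on
    gluster volume stop saigon && gluster volume start saigon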
03:01 haomaiwa_ joined #gluster
03:02 TheSeven joined #gluster
03:17 badone_ joined #gluster
03:19 hagarth joined #gluster
03:29 EinstCrazy joined #gluster
03:29 Lee1092 joined #gluster
03:39 RayTrace_ joined #gluster
03:49 nishanth joined #gluster
03:50 atinm joined #gluster
03:51 overclk joined #gluster
03:56 itisravi joined #gluster
03:59 [7] joined #gluster
04:01 haomaiwa_ joined #gluster
04:11 aravindavk joined #gluster
04:17 RameshN joined #gluster
04:17 gildub joined #gluster
04:20 shubhendu joined #gluster
04:28 overclk joined #gluster
04:32 dlambrig joined #gluster
04:36 ir8 joined #gluster
04:36 ir8 greetings.
04:37 ir8 Question I have two node cluster using: replica 2. I will need to expand it to replica 3.
04:37 ir8 What is actually needed to be done?
04:37 vimal joined #gluster
04:41 RayTrace_ joined #gluster
04:44 itisravi ir8: 1. gluster volume add-brick <volname> replica 3  hostname:brickname
04:44 itisravi 2. gluster volume heal <volname> full
04:45 itisravi That should poulate the 3rd brick.
04:45 itisravi populate*
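Spelled out, itisravi's two steps look like the following sketch (the hostname and brick path are placeholders):

    # raise the replica count and add the third brick in one command
    gluster volume add-brick <volname> replica 3 host3:/path/to/brick
    # populate the new brick and watch progress
    gluster volume heal <volname> full
    gluster volume heal <volname> info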
04:47 overclk joined #gluster
04:50 sakshi joined #gluster
04:58 rafi joined #gluster
05:00 Bhaskarakiran joined #gluster
05:01 haomaiwa_ joined #gluster
05:06 DV joined #gluster
05:08 Manikandan joined #gluster
05:09 aravindavk joined #gluster
05:10 Manikandan joined #gluster
05:11 nangthang joined #gluster
05:24 gem_ joined #gluster
05:28 vimal joined #gluster
05:35 ashiq joined #gluster
05:35 hgowtham joined #gluster
05:38 overclk joined #gluster
05:40 vmallika joined #gluster
05:42 RayTrace_ joined #gluster
05:46 hagarth joined #gluster
05:48 alghost exit
05:48 alghost left #gluster
05:51 alghost joined #gluster
05:53 Saravana_ joined #gluster
05:57 rjoseph joined #gluster
05:59 deepakcs joined #gluster
06:00 free_amitc_ joined #gluster
06:01 amitc__ joined #gluster
06:01 Trefex joined #gluster
06:01 maveric_amitc_ joined #gluster
06:01 haomaiwa_ joined #gluster
06:02 ramky joined #gluster
06:05 XpineX joined #gluster
06:07 zhangjn joined #gluster
06:07 skoduri joined #gluster
06:12 atalur joined #gluster
06:12 jtux joined #gluster
06:19 baojg joined #gluster
06:19 jwd joined #gluster
06:21 ctria joined #gluster
06:22 aravindavk joined #gluster
06:24 ctrianta joined #gluster
06:27 hchiramm_home joined #gluster
06:29 hagarth joined #gluster
06:32 Manikandan joined #gluster
06:33 Manikandan joined #gluster
06:34 karnan joined #gluster
06:38 mhulsman joined #gluster
06:39 baojg joined #gluster
06:43 RayTrace_ joined #gluster
06:44 mhulsman2 joined #gluster
06:46 gem joined #gluster
06:48 mhulsman joined #gluster
06:49 alghost left #gluster
06:50 alghost joined #gluster
06:52 ramky joined #gluster
07:01 haomaiwa_ joined #gluster
07:03 [Enrico] joined #gluster
07:03 [Enrico] joined #gluster
07:05 anil joined #gluster
07:10 nangthang joined #gluster
07:13 hgichon0 joined #gluster
07:17 dlambrig joined #gluster
07:31 [Enrico] joined #gluster
07:37 onorua joined #gluster
07:38 frakt joined #gluster
07:40 fsimonce joined #gluster
07:41 hagarth joined #gluster
08:01 haomaiwa_ joined #gluster
08:06 arcolife joined #gluster
08:08 baojg joined #gluster
08:09 neha_ joined #gluster
08:11 Pupeno joined #gluster
08:14 ramky joined #gluster
08:14 jcastill1 joined #gluster
08:18 dlambrig joined #gluster
08:18 baojg joined #gluster
08:19 jcastillo joined #gluster
08:20 RayTrace_ joined #gluster
08:20 [Enrico] joined #gluster
08:21 DV joined #gluster
08:21 alghost joined #gluster
08:23 Akee joined #gluster
08:29 muneerse joined #gluster
08:36 LebedevRI joined #gluster
08:40 doekia joined #gluster
08:46 arcolife joined #gluster
08:47 archit_ joined #gluster
08:58 neha joined #gluster
09:00 neha_ joined #gluster
09:01 haomaiwa_ joined #gluster
09:01 neha_ joined #gluster
09:03 doekia joined #gluster
09:15 poornimag joined #gluster
09:18 mash- joined #gluster
09:20 mash333 joined #gluster
09:43 RameshN /msg nickserv help sendpass
09:43 overclk joined #gluster
09:43 hgichon joined #gluster
09:46 gildub joined #gluster
09:47 hchiramm joined #gluster
09:49 poornimag joined #gluster
09:49 skoduri joined #gluster
09:58 onorua joined #gluster
10:00 dusmant joined #gluster
10:01 haomaiwa_ joined #gluster
10:08 hchiramm_ joined #gluster
10:14 pkoro joined #gluster
10:16 pkoro joined #gluster
10:20 onorua joined #gluster
10:32 Bhaskarakiran_ joined #gluster
10:34 DV joined #gluster
10:42 harish joined #gluster
10:44 DRoBeR joined #gluster
10:46 baojg joined #gluster
10:49 jiffin joined #gluster
10:57 Leildin JoeJulian, I hope you're sleeping tight and all, when you have a sec I tried "renaming all the .vol files under /var/lib/glusterd/vols and running "pkill glusterd ; glusterd --xlator-option *.upgrade=on -N ; glusterd""
10:58 Leildin what i did is mv the files from  /var/lib/glusterd/vols/data to a safe place and did the command
10:58 Leildin glusterd would not restart after that
10:58 Bhaskarakiran_ joined #gluster
10:58 gem_ joined #gluster
10:59 Leildin I moved my files back into place and rebooted to get everything back to normal until you wake up again
10:59 nangthang joined #gluster
11:01 haomaiwa_ joined #gluster
11:09 skoduri joined #gluster
11:10 rjoseph joined #gluster
11:16 DRoBeR joined #gluster
11:19 mhulsman1 joined #gluster
11:19 mhulsman2 joined #gluster
11:19 dusmant joined #gluster
11:25 mhulsman joined #gluster
11:30 poornimag joined #gluster
11:32 kkeithley joined #gluster
11:34 gem_ joined #gluster
11:37 chirino joined #gluster
11:37 KennethDejonghe joined #gluster
11:46 zhangjn joined #gluster
11:53 arcolife joined #gluster
11:55 jcastill1 joined #gluster
12:00 pkoro joined #gluster
12:01 jcastillo joined #gluster
12:01 haomaiwa_ joined #gluster
12:03 Slashman joined #gluster
12:07 DV joined #gluster
12:17 jtux joined #gluster
12:18 rjoseph joined #gluster
12:25 skoduri_ joined #gluster
12:28 rgustafs joined #gluster
12:39 hagarth joined #gluster
12:39 julim joined #gluster
12:40 _Bryan_ joined #gluster
12:43 onorua joined #gluster
12:45 atinm joined #gluster
12:47 Saravana_ joined #gluster
12:52 Manikandan joined #gluster
12:53 Manikandan joined #gluster
12:54 onorua joined #gluster
12:56 mhulsman1 joined #gluster
12:58 ira joined #gluster
12:59 firemanxbr joined #gluster
12:59 unclemarc joined #gluster
13:13 haomaiwa_ joined #gluster
13:14 shyam joined #gluster
13:16 haomaiwa_ joined #gluster
13:26 mpietersen joined #gluster
13:27 mpietersen joined #gluster
13:38 sblanton joined #gluster
13:41 harold joined #gluster
13:41 ramky joined #gluster
13:46 phantasm66 joined #gluster
13:48 DRoBeR left #gluster
13:48 phantasm66 Hi folks :) I'm running glusterfs 3.7.2 on centos 6.5 and am seeing differing "size on disk" for a gluster mounted volume.
13:49 phantasm66 When i run a typical 'df -h' i see the gluster mounted volume is using 109G
13:50 phantasm66 When i run 'du -sh' on the gluster mounted volume, i see it is using 529M
13:50 skoduri joined #gluster
13:50 phantasm66 FTR: the latter is actually correct
13:50 phantasm66 i have gluster quotas disabled
13:51 phantasm66 and lsof reports no stale (deleted) open file handles
13:51 phantasm66 Any ideas what might be going on here?
13:52 phantasm66 i could live with a few MBs difference, but when the kernel (df -h) thinks i'm using 109GBs more than I am... that's kind of a problem
14:00 JoeJulian @lucky sparse files
14:00 glusterbot JoeJulian: https://en.wikipedia.org/wiki/Sparse_file
14:01 64MADVP3L joined #gluster
14:02 JoeJulian phantasm66: Probably that. ^
14:03 dgandhi joined #gluster
14:05 phantasm66 @JoeJulian is there a way to verify that? or some way to defrag a gluster volume while it's online?
14:08 JoeJulian du --apparent-size
14:09 phantasm66 @JoeJulian:
14:09 phantasm66 https://gist.github.com/phantasm66/a17e53869de065a70256
14:09 glusterbot Title: gist:a17e53869de065a70256 · GitHub (at gist.github.com)
14:09 phantasm66 Doesn't seem to agree with what df -h is still saying
14:11 phantasm66 I've also noticed on occasion that the mounted gluster volume 'df -h' results fluctuate wildly within a minute or two
14:11 JoeJulian @pasteinfo
14:11 glusterbot JoeJulian: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
14:11 phantasm66 particularly at night - when the volume has no apparent writes
14:12 phantasm66 @JoeJulian: https://gist.github.com/phantasm66/05edcf1acb6d96ad0999
14:13 phantasm66 these are not docker containers, btw.. they are real, physical hosts that run a few docker containers completely outside the scope of gluster
14:14 JoeJulian phantasm66: In /data/brick/maillist where's the mount?
14:14 firemanxbr joined #gluster
14:14 phantasm66 sorry, I'm not understanding
14:15 JoeJulian Is one of those directories is a mounted filesystem?
14:15 JoeJulian s/is a/a/
14:15 glusterbot What JoeJulian meant to say was: Is one of those directories a mounted filesystem?
14:16 phantasm66 https://gist.github.com/phantasm66/122523901d53cdc20ea0
14:16 glusterbot Title: gist:122523901d53cdc20ea0 · GitHub (at gist.github.com)
14:17 JoeJulian So you're not using your ephemeral for your bricks? Because that says it's /mnt/ephemeral and your bricks are on /data/brick/maillist
14:18 onorua joined #gluster
14:21 phantasm66 how is it that I write to /mnt/ephemeral/personalization/ on one host (running gluster and replicating with 11 others), and see that file in the same directory on the 11 other hosts then?
14:21 phantasm66 It sounds like I'm doing something wrong, but I'm just not clear on what that is...
14:22 JoeJulian paste up /proc/mounts
14:23 JoeJulian Ah, I see. You posted the volume mount. That's where that confusion came from.
14:24 phantasm66 I definitely seem to be having the same files in any of the /data/brick/maillist/personalization/ directories that I do in /mnt/ephemeral/personalization/
14:24 phantasm66 https://gist.github.com/phantasm66/c051f7715646ddb6058f
14:24 glusterbot Title: gist:c051f7715646ddb6058f · GitHub (at gist.github.com)
14:26 phantasm66 @JoeJulian: here is my /proc/mounts: https://gist.github.com/phantasm66/983b7aac644fb61cb955
14:27 JoeJulian Ok, your bricks exist on your root filesystem. So the files that exist outside of the brick root still count toward df.
14:28 * phantasm66 face palm
14:28 JoeJulian :)
14:28 phantasm66 so, like everything else on / ?
14:28 chirino joined #gluster
14:29 pdrakeweb joined #gluster
14:29 JoeJulian on all your servers.
14:30 baojg joined #gluster
14:30 phantasm66 so I should have setup this volume on it's own mounted *partition*, right?
14:30 JoeJulian The big fluctuation is probably the log rotation at night.
14:30 onorua joined #gluster
14:30 JoeJulian Typically, yes. Make /data/brick a partition mount.
14:31 phantasm66 fwiw: gluster did warn me about this ;)
14:31 phantasm66 I just told it to continue
14:31 phantasm66 I'm too thick to understand *why* it was warning me
14:31 JoeJulian Yeah, someone was thinking when they wrote that warning. :D
14:31 phantasm66 so.. thank you sir
14:31 JoeJulian You're welcome.
14:31 phantasm66 cheers
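A minimal sketch of the layout JoeJulian recommends above: give the bricks their own filesystem so df on the volume only reflects brick data. The device name is hypothetical; xfs is the usual choice (see later in this log):

    mkfs.xfs /dev/sdb1
    mkdir -p /data/brick
    echo '/dev/sdb1  /data/brick  xfs  defaults  0 0' >> /etc/fstab
    mount /data/brick
    # bricks then live under the mount, e.g. /data/brick/maillist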
14:31 Iouns joined #gluster
14:33 Leildin JoeJulian, can I borrow a bit of your time ?
14:33 mreamy joined #gluster
14:34 hchiramm_home joined #gluster
14:36 Iouns Hi guys, I have a very slow gluster 3.4 cluster, with two nodes, one brick each in replicate topology. gluster nfs server is failing to receive some files with following log output in nfs.log : http://pastebin.com/bVzc1jeg . Can someone help me on this?
14:36 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:37 Iouns http://fpaste.org/268931/
14:37 glusterbot Title: #268931 Fedora Project Pastebin (at fpaste.org)
14:38 JoeJulian Leildin: "what i did is mv the files from  /var/lib/glusterd/vols/data to a safe place" not the same as renaming .vol files. Most of the rest of the files are needed to make that command successful.
14:40 calavera joined #gluster
14:40 Leildin my question regarding that was do I rename ALL .vol files or just the ones regarding bricks ?
14:41 JoeJulian Iouns: Wow... haven't seen that in a long while. You're in a split-brain state. The file was created with a different gfid on both replica. You'll need to remove one of them. ,,(splitmount) should still work on 3.4 I think.
14:41 glusterbot Iouns: https://github.com/joejulian/glusterfs-splitbrain
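To confirm the gfid mismatch JoeJulian describes, the same file can be inspected on each brick; a sketch, with hypothetical brick paths:

    getfattr -d -m . -e hex /data/brick1/path/to/file    # on the first replica
    getfattr -d -m . -e hex /data/brick2/path/to/file    # on the second replica
    # differing trusted.gfid values confirm the split-brain; remove the unwanted copy
    # (and its hard link under .glusterfs/) or use the splitmount tool linked above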
14:41 JoeJulian Leildin: all
14:41 _maserati joined #gluster
14:42 Leildin I only moved .vol files, including "data.tcp-fuse.vol"
14:42 Iouns JoeJulian, that's what I suspected, but gluster volume info split-brain reports nothing, is it normal?
14:42 JoeJulian Yes
14:42 JoeJulian It's an edge case that has since been addressed.
14:43 JoeJulian Leildin: Hmm.
14:43 JoeJulian Leildin: do it again and "glusterd --debug" as the last step so we can see why it's failing.
14:44 Leildin data-rebalance.vol <- contains multiple volume data-client-0 entries and the last two are my two new bricks with 1 and 2 instead of 0 is that maybe causing problems ?
14:45 JoeJulian I wonder if it thinks it's still in the middle of a rebalance.
14:45 Leildin status says failed with 0 everywhere regarding any action taken
14:45 Iouns thank you JoeJulian, I will dig this
14:45 Leildin I'll try --debug
14:46 neofob joined #gluster
14:47 Leildin mv *.vol ../ to put the .vol files in a safe place
14:47 Leildin next "pkill glusterd ; glusterd --xlator-option *.upgrade=on -N ; glusterd --debug"
14:47 Leildin should I stop my volume?
14:48 JoeJulian no need
14:49 ajneil joined #gluster
14:50 Leildin http://ur1.ca/ntfek
14:50 glusterbot Title: #268939 Fedora Project Pastebin (at ur1.ca)
14:50 Leildin last few lines of what was written during that command
14:50 Leildin sorry for french stuff, tell me if you want anything translated
14:51 Leildin do you want complete log ?
14:52 JoeJulian Leildin: Line 13.
14:52 JoeJulian Does that path exist?
14:52 JoeJulian Why isn't it a file?
14:53 Leildin N'est pas un dossier <- is not a folder
14:53 JoeJulian Ah, so it doesn't exist.
14:54 Leildin nope
14:54 JoeJulian Well there's your problem.
14:54 Leildin the /info part is non existant
14:54 Leildin ls -l /var/lib/glusterd/vols/data.gls-safran1.gluster-bricks-brick1-data.vol
14:54 Leildin -rw------- 1 root root 2105 17 sept. 17:25 /var/lib/glusterd/vols/data.gls-safran1.gluster-bricks-brick1-data.vol
14:54 glusterbot Leildin: -rw-----'s karma is now -5
14:55 JoeJulian heh
14:55 Leildin that file does exist
14:55 Leildin and my karma is waaaay lower than that :p
14:55 JoeJulian That was -rw-*'s karma, not yours.
14:56 EinstCrazy joined #gluster
14:56 JoeJulian So did you save that info file on any of your other servers? (when I had you mv vols to /tmp)?
14:56 Leildin well great news, there is no other server !
14:56 Leildin we use gluster with a single node
14:57 Leildin (don't ask, I had 4 nodes in mind when I thought up how to use gluster but I was shot down)
14:58 JoeJulian Ah, ok, then you'll need to delete /var/lib/glusterd/vols/data.gls-safran1.gluster-bricks-brick1-data.vol
14:58 JoeJulian (or mv it)
14:58 JoeJulian That shouldn't be there anyway.
14:58 ramky joined #gluster
14:58 JoeJulian There should be no vol files under /var/lib/glusterd anywhere.
14:59 JoeJulian Until we run that upgrade command.
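Putting JoeJulian's instructions together: the regeneration step moves only the generated *.vol files, leaving info, bricks/ and the other state files in place; a sketch using the paths from this log:

    cd /var/lib/glusterd/vols/data
    mkdir -p /root/volfile-backup && mv *.vol /root/volfile-backup/
    pkill glusterd
    glusterd --xlator-option '*.upgrade=on' -N    # rebuilds the volfiles, then exits
    glusterd                                      # start normally again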
15:01 haomaiwa_ joined #gluster
15:02 Leildin I think the mv *.vol .. was a mistake, it put them on the same level as the folder containing vital info
15:02 Leildin I'll move them to home folder for safer keeping
15:02 glusterbot Leildin: Please don't naked ping. http://blogs.gnome.org/mark​mc/2014/02/20/naked-pings/
15:02 JoeJulian hush, glusterbot.
15:02 Leildin oO
15:02 Leildin he hates me
15:03 JoeJulian Just bad regex matching.
15:03 JoeJulian Someone was lazy...
15:03 * JoeJulian looks around for someone else to blame.
15:03 Leildin haha :D
15:04 Leildin I seem to be stuck in an infinite loop
15:04 JoeJulian I feel like that some mornings.
15:04 Leildin I meant literally in gluster but yeah same feeling on mondays especially
15:05 JoeJulian Which step?
15:05 Leildin http://ur1.ca/ntfi7
15:05 glusterbot Title: #268955 Fedora Project Pastebin (at ur1.ca)
15:05 Leildin it repeats this stuff again and again
15:06 JoeJulian Looks like it worked.
15:06 Leildin I had to ctrl + C it
15:06 JoeJulian So just start glusterd normally and you should be golden.
15:06 Leildin line 53
15:06 Leildin ^C
15:06 JoeJulian Yeah, because you were in debug, running in the foreground.
15:07 Leildin so just a rebalance order now ?
15:07 JoeJulian Why rebalance? Do you have more than one dht brick?
15:07 Leildin "gluster volume rebalance data start"
15:08 Leildin I have 5 bricks, I'm adding two
15:08 JoeJulian All on one server?
15:08 Leildin yes, again, not my decision
15:08 JoeJulian But yes, you should be ready to rebalance.
15:08 Leildin Connection failed. Please check if gluster daemon is operational.
15:09 hagarth joined #gluster
15:09 Leildin damnit
15:09 JoeJulian /var/log/etc.*log
15:10 Leildin not sure what you meant by that
15:11 Leildin there is no /var/log/etc
15:12 JoeJulian gah
15:12 JoeJulian /var/log/glusterfs/etc.*log
15:15 Leildin http://ur1.ca/ntfk2
15:15 glusterbot Title: #268958 Fedora Project Pastebin (at ur1.ca)
15:16 hchiramm_home joined #gluster
15:16 JoeJulian received signum (15), shutting down
15:16 JoeJulian something sent it a kill.
15:17 _maserati I did. sorry
15:18 Leildin damn you _maserati  :p
15:18 _maserati ;)
15:18 Leildin is it referring to my ctrl + C earlier on ? it really was stuck in an infinite loop
15:18 Leildin should I try the command again and let it run ?
15:18 JoeJulian It's supposed to be earlier. We had it running in debug mode in the foreground.
15:19 JoeJulian Either just "glusterd" or start it via your distro's service system, ie. systemctl start glusterd
15:19 shyam joined #gluster
15:21 amye joined #gluster
15:21 _maserati lol... i feel so shady today
15:21 Leildin same stuff as before !
15:21 Leildin [root@gls-safran1 glusterfs]# gluster volume rebalance data start
15:21 Leildin volume rebalance: data: success: Initiated rebalance on volume data.
15:21 _maserati to make my boss shut up (to give me the more time i need to do it right) i'm mounting our main gluster server through this "failover" gluster node and just ln -s the dirs to the share needed
15:21 Leildin if I status it's status failed and 0 for every value
15:22 Leildin good _maserati do it right ! I got pushed into a useless setup
15:23 _maserati Leildin: you wouldnt beleive the hurricane ive been put in. im considering finding another job. they have 1 linux guy, me.....
15:23 JoeJulian It always costs more time to do it wrong.
15:23 _maserati not when you have a noon deadline
15:23 JoeJulian _maserati: been there, done that.
15:23 Leildin it also costs JoeJulian more time having to help me so many times
15:23 _maserati Me and Joe got our own dance going on right now, and until i figure THAT problem out, i dont want to do it right
15:24 JoeJulian I love deadlines. I especially love the "whoosh" sound they make when they go by.
15:24 Leildin :D
15:24 _maserati lmao
15:24 Leildin you at least have deadlines
15:24 _maserati oh this is my first
15:24 Leildin we have wished upon completion times
15:24 _maserati it's quite a treat
15:24 _maserati usually it's "We need this yesterday"
15:24 JoeJulian We have a "We would like to have a product to begin testing in the first quarter of 2016".
15:25 Leildin just do your best, that's all that counts
15:25 _maserati Man, this is a gluster and a mental health support channel <3
15:25 JoeJulian Anyway... Leildin I'd check the rebalance logs.
15:26 m0zes joined #gluster
15:26 JoeJulian I've always called our channel "A coffee shop with a bunch of sysadmins, and someone's grandmother is the barista."
15:26 Leildin same symptom in data-rebalance.vol
15:27 JoeJulian remind me. I've got 99 bugs I'm juggling.
15:27 Leildin volume data-client-0 <- on first 6 bricks
15:28 Leildin volume data-client-1 <- on brick 7
15:28 Leildin volume data-client-2 <- on brick 8
15:28 Leildin should I try renaming correctly ?
15:28 Leildin 0 to 7 ?
15:28 JoeJulian Wow... that's really cool.
15:29 JoeJulian can you tar up your /var/lib/glusterd and share it with me somehow.
15:29 Leildin that sound kinky but sure
15:30 Leildin PM me your e-mail ?
15:30 JoeJulian me@joejulian.name
15:30 Leildin no I'll share an ftp
15:31 dlambrig joined #gluster
15:31 JoeJulian It's all over the internet, no need to be private.
15:32 _maserati damnit!
15:32 _maserati my shinnanigans arent working
15:32 Leildin busted :D
15:32 _maserati i cant access my symlink thru the share for some reason
15:35 Leildin don't go spending more time fixing your workaround rather than completing the main project :p
15:35 _maserati i cant complete main project in 2.5 hours! its work around or failure
15:35 _maserati the point of the workaround is to give me more time to do it right
15:36 skoduri joined #gluster
15:36 Leildin JoeJulian, received my mail ?
15:39 JoeJulian yep
15:40 JoeJulian _maserati: Want me to call your manager? ;)
15:40 Leildin I'm really starting to wonder if I should just rename "volume data-client-0" correctly and pray it helps
15:41 _maserati JoeJulian: I will send you a strongly worded email if you do!
15:41 JoeJulian You could try, Leildin. I think you've found a bug.
15:44 _maserati f**k yes! workaround complete.
15:44 _maserati time to go home :D
15:44 Leildin yeay !
15:45 JoeJulian Congrats, _maserati
15:45 Leildin I have no idea how to submit a bug ...
15:45 Leildin I'll try and let you know
15:45 Leildin gratz _maserati
15:46 JoeJulian One sec, Leildin, I'm seeing if I can find a repro.
15:47 muneerse2 joined #gluster
15:49 JoeJulian Leildin: Which version of gluster is this?
15:49 Leildin 3.6.2 I believe
15:51 vimal joined #gluster
15:52 Leildin yep 3.6.2
15:53 shyam joined #gluster
15:56 gem joined #gluster
16:00 Leildin should I throw a grenade in my gluster ?
16:01 JoeJulian I don't see that anything like this has been addressed in newer versions.
16:01 _maserati Leildin: is there ever a wrong time to throw a grenade?
16:01 haomaiwa_ joined #gluster
16:02 JoeJulian Ok, Leildin file a bug
16:02 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:02 JoeJulian I'll add what I can to it.
16:02 JoeJulian include "gluster volume info" and that data-rebalance.vol
16:03 JoeJulian @targeted rebalance
16:03 JoeJulian @targeted
16:03 glusterbot JoeJulian: I do not know about 'targeted', but I do know about these similar topics: 'targeted fix-layout', 'targeted self heal'
16:03 JoeJulian @targeted fix-layout
16:03 glusterbot JoeJulian: You can trigger a fix-layout for a single directory by setting the extended attribute distribute.fix.layout to any value for that directory. This is done through a fuse client mount. This does not move any files, just fixes the dht hash mapping.
16:04 JoeJulian Leildin: In the mean time, to make your new bricks usable, "find $volume_mount -type d -exec setfattr -n distribute.fix.layout -v 1".
16:05 JoeJulian That won't move files, but it will regenerate the layout.
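As a runnable sketch of that factoid (note the find command needs its {} placeholder), assuming the volume is fuse-mounted at $volume_mount:

    find "$volume_mount" -type d -exec setfattr -n distribute.fix.layout -v 1 {} \;
    # rewrites the dht layout only; no data is moved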
16:07 Leildin submitting bug, need to create account
16:08 nishanth joined #gluster
16:10 Leildin what component should I put ?
16:10 Leildin distribute ?
16:13 calavera joined #gluster
16:14 squaly joined #gluster
16:21 JoeJulian yes
16:32 Leildin ok bug filed !
16:32 Leildin I might have missed some stuff/details
16:34 Leildin I won't do the layout fix, I really need the redistribution of files, there are no uploads without my say-so on this gluster
16:38 Leildin well, I'm a retard ...
16:38 Leildin I didn't describe my problem
16:38 calavera_ joined #gluster
16:39 JoeJulian Hehe
16:39 Leildin how can I change that part of my bug report ?
16:40 _maserati Isn't that best? Bug Report: Riddle me this, the pattern of my blocks are unlike the pattern of yours, delete me once, and I shall reappear.
16:40 _maserati Keep the devs on their feet.
16:40 JoeJulian Just reply to your own bug report.
16:45 JoeJulian fyi, Leildin "network.frame-timeout: 45" is pretty dangerous and could potentially result in data loss.
16:46 _maserati where are those options put?
16:46 Leildin my boss found that in a forum post
16:46 Leildin would you recommend removing it or what ?
16:47 JoeJulian I would reset that, yes.
16:48 Leildin can I set it to 0 while volume is started ?
16:51 Leildin wait no ...
16:51 Leildin it was 1800 originally
16:51 Leildin I can't remember
17:02 JoeJulian gluster volume reset
17:02 m0zes joined #gluster
17:03 m0zes joined #gluster
17:09 Leildin gluster volume reset network.frame-timeout
17:09 Leildin ?
17:09 Leildin gluster volume reset data network.frame-timeout
17:10 JoeJulian ++
17:10 Leildin ok thx :)
17:10 JoeJulian You can, of course, do that with anything you want to set back to the default.
17:11 Leildin if there's anything else that seems stupid pray tell, I was not involved in these settings
17:11 Rapture joined #gluster
17:16 ghenry joined #gluster
17:20 jobewan joined #gluster
17:20 bluenemo joined #gluster
17:21 bluenemo hi guys. I'm new to gluster and wanted to ask whats the recommended fs for bricks - I found some tutorials with xfs and some with ext4.
17:23 Leildin I think it's anything with extended attributes that really counts, the rest is marginal differences
17:24 Leildin I've done "fine" with xfs
17:31 JoeJulian xfs
17:35 Leildin JoeJulian, I'm changing the data-replication.vol file manually
17:36 Leildin setting values to correct numbers and trying to balance again
17:36 Leildin if it catches fire, I'll be moving to guatemala
17:40 ajneil bluenemo just make sure the inode size is big enough with xfs, the rhel default is too small.
17:44 GB21 joined #gluster
17:44 GB21 left #gluster
17:44 GB21 joined #gluster
17:45 RayTrace_ joined #gluster
17:51 dgbaley joined #gluster
17:52 Leildin JoeJulian, repaired everything by changing values in trusted-data.tcp-fuse.vol and  data.tcp-fuse.vol
17:52 Leildin also data-rebalance.vol obviously
17:53 Leildin I'll try and get a comment on the bug report that explains what I went through but there's still a problem
18:00 GB21_ joined #gluster
18:01 _maserati guatemala... from where?
18:02 JoeJulian ajneil: Actually, performance testing didn't back up that expectation. The default inode sizes are sufficient.
18:15 RayTrace_ joined #gluster
18:21 ajneil JoeJulian: interesting, I had not seen any refutation; it seems common sense that if you have to split the attributes there will be performance issues
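For context, the brick format that the older documentation recommends uses larger inodes so gluster's xattrs fit inline; per JoeJulian above, the default size has also tested fine, so treat this as optional (device name hypothetical):

    mkfs.xfs -i size=512 /dev/sdb1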
18:23 hagarth joined #gluster
18:27 Nebraskka Using 3 nodes, serving web server forum attachments (10-1000KB each file). Replicas are in different datacenters (10ms latency). Is it possible to do reads from local volume only, by skipping consistency check on reads?
18:27 _maserati I think reads will come from the fastest response, your writes will require an ACK from all 3 datacenters
18:29 Nebraskka no problems with ACK from all replicas, i just want to boost speed on reads, because currently without gluster our mediawiki loading pages with images for 200-300ms, and with gluster it's around 3-5 sec
18:29 Nebraskka so it's ok to write slower for consistency, just wanted to boost reads somehow
18:29 _maserati wierd
18:29 Nebraskka even if it would mean skip read consistency
18:30 Nebraskka well this 3-5 sec thing happening on pages with > 20-40 images (all lying on gluster)
18:30 Nebraskka but we don't have such behavior on local fs, without gluster
18:30 _maserati ah
18:30 Nebraskka so blaming read consistency
18:30 _maserati this is a level over my head, you're gonna have to wait for Joe
18:30 Nebraskka i see =) thanks!
18:31 Nebraskka would be interesting if there is some practice with this already exist
18:31 _maserati im sure there's something that'll help
18:36 Nebraskka JoeJulian, if you have a moment, i'll be grateful =]
18:36 Nebraskka thanks for advance
18:49 hgichon joined #gluster
18:55 JoeJulian @php
18:55 glusterbot JoeJulian: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
18:55 glusterbot JoeJulian: --fopen-keep-cache
18:55 JoeJulian There's the summation of all my advice on that issue. :D
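Option #2 from the factoid, written out as a direct fuse mount; the server name, volume name, mount point and timeout values are placeholders:

    glusterfs --volfile-server=server1 --volfile-id=webroot \
      --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 \
      --fopen-keep-cache /var/www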
19:06 _maserati Oh you and your glusterbot stuffs
19:11 Nebraskka JoeJulian, wow, thanks for tips =)
19:12 Nebraskka one more reason that can slow it down is .htaccess checks (apache2) on every IO, which can be disabled by migrating .htaccess in config and setting AllowOverride None
19:13 sage joined #gluster
19:14 free_amitc_ joined #gluster
19:14 Nebraskka io slow downs coming mostly from loading several images on one page (and maybe .htaccess lookups for each of them), anything else is already done - sessions in redis, cache in memcached
19:14 maveric_amitc_ joined #gluster
19:14 amitc__ joined #gluster
19:14 Nebraskka but those params look interesting, looks like i just need to experiment further
19:15 sage joined #gluster
19:19 _maserati eww apache
19:19 Nebraskka in front of it nginx there
19:19 _maserati Once you go nginx, you never... goback bich
19:19 _maserati that still doesnt rhyme, hmmm
19:19 Nebraskka yeah, but my apps depends on .htaccess that is not so easily migrated to nginx rules
19:20 _maserati whaaat
19:20 _maserati really? i wouldnt know... i was never a big apache fan
19:20 _maserati but i do know nginx is soooooo configurable and easy
19:20 Nebraskka well there are forum software and things like mediawiki
19:20 _maserati =B
19:20 Nebraskka that requires .htaccess rules by default for proper routing
19:20 jwd joined #gluster
19:21 _maserati ah
19:21 _maserati maybe.... symlink .htaccess files to a local storage device?
19:21 Nebraskka but those only get rewritten for nginx by enthusiasts, and not fast enough or with enough quality compared to the official vendor (which releases only the apache .htaccess version)
19:21 Nebraskka hmm
19:22 jwaibel joined #gluster
19:22 _maserati but then again... with that php stuff, it'll be fast enough
19:23 johnmark joined #gluster
19:24 m0zes joined #gluster
19:24 Nebraskka sounds like a good idea too; glusterfs would ignore symlinks if those would be on it's volume? or it would result in gluster reads too?
19:25 Nebraskka i mean, would storing symlinks on gluster results in gluster read?
19:25 _maserati Yeah that's still a prob
19:25 Nebraskka if so, looks like it would be the same >.> dang
19:25 PaulCuzner joined #gluster
19:25 Nebraskka if nothing from tips would help, there is no way to ignore consistency checks on reads? or gluster just don't work this way?
19:25 Nebraskka by architecture
19:25 _maserati but can apache be configured to look at .htaccess files in a diff location? I know they are supposed to be in the working folder...
19:26 Nebraskka well apache can keep .htaccess on each needed directory right in config that loads only 1 time with apache startup
19:26 _maserati again that is a Joe question
19:26 Nebraskka i'm still not sure those htaccesses are the only reason
19:26 Nebraskka but probably going to experiment
19:26 _maserati yeah if it only needs to load 1 time, then ef it
19:27 RayTrace_ joined #gluster
19:36 ramky joined #gluster
19:40 calavera joined #gluster
20:12 GB21 joined #gluster
20:12 timotheus1 joined #gluster
20:16 bluenemo joined #gluster
20:17 bluenemo I've read some recommendations to use nfs for clients for small files. at what size are files small? are we talking mailservers with maildir - small email text files? My current use case would be a webroot, mostly some php scripts and a big bunch of .jpg images
20:23 bluenemo whats the current recommended backup procedure? I read about snapshots with thin provisioned lvm volumes for bricks, but I dont really want to use lvm. just doing backup via rsync ov the brick dir isnt a good idea right? I would prefer to backup at the brick level, as it appears to be the fastest way of restoring with alotta servers instead of setting up the gluster setup from scratch again and then putting files back into the glusterfs mounted
20:23 bluenemo dirs - is it
20:23 bluenemo ?
20:23 bluenemo also, can I just resize an ext4 or xfs volume a brick is located on? is growing and shrinking ok?
20:30 RayTrace_ joined #gluster
20:52 cholcombe joined #gluster
20:53 _maserati @php
20:53 glusterbot _maserati: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
20:53 glusterbot _maserati: --fopen-keep-cache
21:05 corretico joined #gluster
21:14 DV joined #gluster
21:21 ira joined #gluster
22:39 ws2k3 joined #gluster
23:02 JoeJulian Nebraskka: Yes, there are ways to disable the self-heal checks. If you're sure you want to do this and understand the way gluster works sufficiently to be certain that's what your use case wants, the settings can be found in "gluster volume set help".
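The toggles JoeJulian is most likely pointing at are the per-volume client-side self-heal checks; a sketch, assuming the consistency trade-off is understood:

    gluster volume set <volname> cluster.data-self-heal off
    gluster volume set <volname> cluster.metadata-self-heal off
    gluster volume set <volname> cluster.entry-self-heal off
    gluster volume set help    # full list of tunables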
23:50 tessier It's been a couple of months but we are getting close to trying out gluster for VM storage again.
23:52 tessier Is gluster volume snapshot known to be robust? I'm thinking about using it to be able to take whole-image backups of VMs.
23:56 tessier If I could suspend the VM, take a snap, unsuspend the VM, backup the snap, that would be huge.
23:57 tessier And then I can run the images through zbackup and keep deduped historical versions of each VM...
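A sketch of the snapshot workflow tessier is considering; it requires bricks on thin-provisioned LVM, and the names are examples:

    gluster snapshot create nightly vmstore      # the stored name gets a timestamp suffix by default
    gluster snapshot list vmstore
    gluster snapshot activate <snap-name-from-list>
    # the activated snapshot can be mounted read-only and streamed to backup, then dropped:
    gluster snapshot deactivate <snap-name-from-list>
    gluster snapshot delete <snap-name-from-list>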
