
IRC log for #gluster, 2015-09-23


All times shown according to UTC.

Time Nick Message
00:17 RedW joined #gluster
00:27 nangthang joined #gluster
00:39 pdrakeweb joined #gluster
00:42 nishanth joined #gluster
00:57 EinstCrazy joined #gluster
01:01 purpleidea joined #gluster
01:28 Lee1092 joined #gluster
01:34 gnudna joined #gluster
01:36 gnudna left #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:58 nangthang joined #gluster
01:58 haomaiwa_ joined #gluster
02:01 haomaiwa_ joined #gluster
02:18 harish joined #gluster
02:46 nangthang joined #gluster
02:49 yosafbridge joined #gluster
02:52 nangthang joined #gluster
03:01 haomaiwang joined #gluster
03:06 gildub joined #gluster
03:17 shubhendu joined #gluster
03:18 karnan joined #gluster
03:18 zhangjn_ joined #gluster
03:31 Pupeno joined #gluster
03:33 atinm joined #gluster
03:36 sakshi joined #gluster
03:40 nishanth joined #gluster
03:40 gem joined #gluster
03:42 shubhendu joined #gluster
03:51 [7] joined #gluster
04:01 haomaiwa_ joined #gluster
04:01 RameshN joined #gluster
04:02 kshlm joined #gluster
04:07 nbalacha joined #gluster
04:08 kotreshhr joined #gluster
04:08 itisravi joined #gluster
04:09 _joel1 joined #gluster
04:11 neha joined #gluster
04:15 kanagaraj joined #gluster
04:21 kshlm joined #gluster
04:25 zhangjn joined #gluster
04:25 yazhini joined #gluster
04:28 zhangjn_ joined #gluster
04:33 zhangjn joined #gluster
04:33 kotreshhr left #gluster
04:37 calavera joined #gluster
04:39 zhangjn_ joined #gluster
04:43 zhangjn joined #gluster
04:46 karnan joined #gluster
04:53 RameshN joined #gluster
04:56 deepakcs joined #gluster
04:59 zhangjn joined #gluster
05:00 ndarshan joined #gluster
05:01 haomaiwa_ joined #gluster
05:07 ppai joined #gluster
05:08 jiffin joined #gluster
05:08 hgowtham joined #gluster
05:14 ramky joined #gluster
05:27 vimal joined #gluster
05:38 Philambdo joined #gluster
05:40 Bhaskarakiran joined #gluster
05:41 kotreshhr joined #gluster
05:42 Manikandan joined #gluster
05:46 rjoseph joined #gluster
05:48 pppp joined #gluster
05:49 rotbeard joined #gluster
05:52 vmallika joined #gluster
05:54 kshlm joined #gluster
05:56 shubhendu joined #gluster
05:56 hagarth joined #gluster
06:01 haomaiwa_ joined #gluster
06:12 mhulsman joined #gluster
06:14 mhulsman1 joined #gluster
06:16 mhulsman2 joined #gluster
06:21 rgustafs joined #gluster
06:22 overclk joined #gluster
06:24 kdhananjay joined #gluster
06:31 GB21 joined #gluster
06:32 EinstCrazy joined #gluster
06:32 maveric_amitc_ joined #gluster
06:39 raghu joined #gluster
06:44 ramteid joined #gluster
06:57 atalur joined #gluster
07:01 haomaiwa_ joined #gluster
07:04 nangthang joined #gluster
07:05 [Enrico] joined #gluster
07:05 cliluw joined #gluster
07:06 Pupeno joined #gluster
07:07 cuqa_ joined #gluster
07:07 anil joined #gluster
07:16 vimal joined #gluster
07:25 prg3 joined #gluster
07:28 beeradb_ joined #gluster
07:36 shubhendu joined #gluster
07:41 Trefex joined #gluster
08:01 haomaiwa_ joined #gluster
08:04 jcastill1 joined #gluster
08:06 LebedevRI joined #gluster
08:09 jcastillo joined #gluster
08:12 haomaiwang joined #gluster
08:15 jcastillo joined #gluster
08:17 Saravana_ joined #gluster
08:26 arcolife joined #gluster
08:30 suliba joined #gluster
08:33 shubhendu_ joined #gluster
08:41 Slashman joined #gluster
08:56 Saravana_ joined #gluster
08:59 [Enrico] joined #gluster
09:01 17SADNJPU joined #gluster
09:05 atinm joined #gluster
09:34 RedW joined #gluster
09:35 harish joined #gluster
09:40 jcastill1 joined #gluster
09:46 jcastillo joined #gluster
09:47 atinm joined #gluster
10:01 17SADNJ23 joined #gluster
10:09 Saravana_ joined #gluster
10:14 ashiq joined #gluster
10:16 ashiq joined #gluster
10:22 mash333 joined #gluster
10:27 Saravana_ joined #gluster
10:45 Bhaskarakiran joined #gluster
11:01 haomaiwa_ joined #gluster
11:09 akay joined #gluster
11:13 David-Varghese joined #gluster
11:14 ro_ joined #gluster
11:16 Mr_Psmith joined #gluster
11:16 overclk joined #gluster
11:18 Bhaskarakiran joined #gluster
11:23 pdrakeweb joined #gluster
11:23 haomaiwa_ joined #gluster
11:26 rwheeler joined #gluster
11:32 mhulsman joined #gluster
11:41 Romeor WAZZUP
11:49 EinstCrazy joined #gluster
11:52 kotreshhr left #gluster
11:54 amye joined #gluster
12:00 jrm16020 joined #gluster
12:01 ndevos *REMINDER* the weekly Gluster Community meeting is starting now in #gluster-meeting
12:02 hagarth1 joined #gluster
12:04 zhangjn joined #gluster
12:04 EinstCrazy joined #gluster
12:07 atinmu joined #gluster
12:08 David_Varghese joined #gluster
12:11 Mr_Psmith joined #gluster
12:11 jiffin joined #gluster
12:25 ro_ Hey guys - I'm trying to set up a distributed replicated cluster. I'm completely new to gluster. Initially I set up a 4 node cluster just as a test - but when I attempt to start it with 8 nodes it times out.
12:26 ro_ the command I'm using is "gluster volume create test replica 2 transport tcp"
12:26 ro_ with the 8 nodes following - "address:/path"
12:27 theron joined #gluster
12:29 atinmu ro_, are all the nodes in the trusted storage pool
12:30 atinmu ro_, what does the gluster peer status output say, is it showing all the nodes as connected?
12:30 ro_ atinmu - yeah it shows all nodes connected
12:31 atinmu ro_, hmm interesting
12:31 atinmu ro_, which gluster version are you using
12:31 ro_ 3.7.3
12:32 atinmu ro_, ok
12:33 atinmu ro_, Could you check glusterd log at all other 7 nodes and see if you can find anything abnormal?
12:34 ro_ atinmu - yeah give me a few, resetting my environment because I had to do some code reviews. Didn't expect to get a response this early sorry
12:34 vimal joined #gluster
12:34 ro_ thanks for the help
12:34 atinmu ro_, np :)
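
For reference, the command ro_ describes expands to something like the sketch below. Hostnames and brick paths are placeholders, and every node must first be probed into the trusted storage pool, which is what atinmu is checking above.

    # from any node already in the pool (hypothetical hostnames)
    gluster peer probe server2
    # ... repeat for server3 through server8 ...
    gluster peer status                     # every peer should show "Peer in Cluster (Connected)"

    # 8 bricks with replica 2 -> a 4 x 2 distributed-replicated volume
    gluster volume create test replica 2 transport tcp \
        server1:/bricks/test server2:/bricks/test \
        server3:/bricks/test server4:/bricks/test \
        server5:/bricks/test server6:/bricks/test \
        server7:/bricks/test server8:/bricks/test
    gluster volume start test
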
12:39 unclemarc joined #gluster
12:45 jcastill1 joined #gluster
12:48 vimal joined #gluster
12:54 archit_ joined #gluster
12:54 mpietersen joined #gluster
12:55 mpietersen joined #gluster
13:01 mhulsman1 joined #gluster
13:02 jcastillo joined #gluster
13:02 nbalacha joined #gluster
13:06 skylar joined #gluster
13:08 mhulsman joined #gluster
13:08 haomaiwa_ joined #gluster
13:11 julim joined #gluster
13:11 jiffin1 joined #gluster
13:13 bennyturns joined #gluster
13:15 auzty joined #gluster
13:18 atinmu joined #gluster
13:19 plarsen joined #gluster
13:23 bennyturns joined #gluster
13:31 _joel joined #gluster
13:35 pdrakeweb joined #gluster
13:40 Trefex1 joined #gluster
13:45 klaxa|work joined #gluster
13:48 dgandhi joined #gluster
13:49 dgandhi joined #gluster
13:59 theron_ joined #gluster
14:01 haomaiwa_ joined #gluster
14:01 jdarcy joined #gluster
14:03 jiffin joined #gluster
14:06 shubhendu_ joined #gluster
14:16 skoduri_ joined #gluster
14:17 spcmastertim joined #gluster
14:28 mhulsman joined #gluster
14:36 shubhendu__ joined #gluster
14:37 dgbaley joined #gluster
14:38 poornimag joined #gluster
15:05 hchiramm joined #gluster
15:06 hchiramm_ joined #gluster
15:08 _maserati joined #gluster
15:16 neofob joined #gluster
15:16 theron joined #gluster
15:18 cholcombe joined #gluster
15:25 dlambrig_ joined #gluster
15:30 jbrooks joined #gluster
15:31 bluenemo joined #gluster
15:34 jiffin joined #gluster
15:34 bluenemo hi guys. I've got two gluster servers serving disk + ext4 => bricks, combined to one replica 2 transport tcp volume. I'm mounting this volume on a third server via mount -t glusterfs. When I do chown operations with UIDs that don't exist on the serving gluster servers, I get this error in /var/log/glusterfs/bricks/srv-gfs_fin_web-brick.log: E [marker.c:2573:marker_removexattr_cbk] 0-gfs_fin_web-marker: No data available occurred while creating symlinks
15:35 bluenemo Using Ubuntu 14.04 with the PPA from the doc, glusterfs 3.5.6 built on Sep 16 2015 15:27:33
15:39 bluenemo Internet suggests my ext4 disk is missing mount option user_xattr, which xfs has by default - is xfs the preferred fs to setup?
15:40 skylar yeah, that's what I've always used
15:40 skylar gluster uses xattr's for its own internal purposes
15:40 skylar replication tracking, etc.
15:40 bluenemo I've read both in the doc and am more familiar with ext4 - so I thought I'd go with that
15:40 bluenemo ah ok i see
15:40 bluenemo if I just remount stuff, will this break it?
15:41 bluenemo as in remount with user_xattr enabled
15:41 bluenemo its not production but i've copied a big bunch of data to it already..
15:41 bluenemo dont want to copy again :)
15:42 skylar not sure if you can just remount and have gluster apply the xattr's after volume creation
15:42 skylar but since this is just a replica you can get the underlying data out from the brick mount point with standard UNIX tools
15:44 bluenemo you mean create new bricks?
15:44 bluenemo hm
15:44 bluenemo sounds time consuming :/
15:44 bluenemo ;)
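
As a point of reference for the ext4-vs-xfs question above, typical brick entries look like the sketch below; device names and mount points are made up, not taken from the log. Most gluster documentation recommends XFS with a 512-byte inode size so the trusted.* extended attributes gluster writes can stay inside the inode.

    # hypothetical device and mount point
    mkfs.xfs -i size=512 /dev/sdb1                      # (re)formats the brick device

    # /etc/fstab entries for the brick filesystem
    /dev/sdb1  /srv/brick1  xfs   defaults,inode64,noatime     0 0
    # or, keeping an existing ext4 brick with the option discussed above:
    /dev/sdb1  /srv/brick1  ext4  defaults,user_xattr,noatime  0 0
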
15:44 Driskell joined #gluster
15:45 bluenemo I'll give it a shot. lets see what happens. I'll stop one node before and modify the other, if it dies I'lll kill it and start the other one again :)
15:47 Driskell Hello! Should the Ubuntu 14.04 PPA Gluster 3.7 packages have the fix in for automount on boot (https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/876648) I cannot find the configuration files in /etc/init/. Thanks!
15:47 glusterbot Title: Bug #876648 “Unable to mount local glusterfs volume at boot” : Bugs : glusterfs package : Ubuntu (at bugs.launchpad.net)
15:47 skylar joined #gluster
15:49 Driskell (As in, do the Trust packages here have the fix: https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7)
15:49 glusterbot Title: glusterfs-3.7 : “Gluster” team (at launchpad.net)
15:51 bluenemo Driskell, I'm currently using 3.5.7 also from the ppa (I think you have the newer one), I dont have that issue.
15:52 akay is there a way to install gluster client version 3.7.2 on ubuntu?
15:53 Driskell bluenemo, Thanks! Interesting. After installing, it looks like the /etc/init/ conf files aren't there. Are they there for you?
15:54 bluenemo No, just discovered the same :) however you should manage stuff by using gluster
15:54 bluenemo as in sth like gluster volume stop my_vol
15:54 bluenemo I'm not yet sure what started stuff in the first place though.. ;)
15:55 Driskell Maybe yes, thought it would be good to automate the mount in /etc/fstab safely somehow!
15:56 jcastill1 joined #gluster
15:56 Driskell It does appear in the bug details though at https://forge.gluster.org/glusterfs-core/glusterfs/blobs/bf770361e9e7121f2ba1524ba02f41fbf12d44e8/extras/Ubuntu/README.Ubuntu that the latest packages for Trusty should have it all fixed, but that seems possibly not to be the case, at least for the PPA packages.
15:56 glusterbot Title: extras/Ubuntu/README.Ubuntu - glusterfs in GlusterFS Core - Gluster Community Forge (at forge.gluster.org)
15:58 bluenemo Driskell, yeah, I've got mine in /etc/fstab too. works here.
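
The client-side fstab line both of them are using is roughly the following; server, volume and mount point are placeholders. On Ubuntu 14.04 the _netdev/nobootwait options ask mountall not to block boot on a network filesystem; whether the PPA packages ship the upstart bits to retry the mount once glusterd is up is exactly the question Driskell raises above.

    # /etc/fstab on the client (hypothetical names)
    server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,nobootwait  0 0
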
15:58 calavera joined #gluster
16:00 bluenemo skylar, so what I did now was shut down all mount-only clients and stopped the second gluster server, then stopped the vol on the first gluster server and remounted its brick with defaults,user_xattr. I then started the vol again. I couldn't mount it for like two minutes, there was nothing fancy in the logs though.. gluster one was trying to connect to gluster two and couldn't (as it's down), but I don't get why it didn't let me mount.. now it works flawlessly. ironically when I was using strace mount -t it "just worked" :P
16:01 bluenemo so I guess now it would be safest to shutdown srv one, start srv two, stop its vol, remount it, shutdown srv two, start srv one, then start srv two.
16:01 skylar I would look in /var/log/glusterfs (or wherever ubuntu puts the gluster logs) and see if there's anything concerning
16:01 jcastillo joined #gluster
16:01 theron_ joined #gluster
16:01 bluenemo btw, who determines who is latest? as in, who determines who syncs from whom?
16:02 bluenemo yeah i have a tail on /var/log/glusterfs/* and */*
16:02 skylar as long as the storage nodes can talk to each other, they sync back and forth automatically
16:02 skylar all clients will talk to both of them
16:02 skylar er, actually it's automatic only if you have self-heal on, which might not be a default
16:03 bluenemo yes, but if I do: (all srvs down) start one, create file A on one, stop one, start two, create file A on two with other content, then start one - what happens?
16:03 bluenemo file will be overwritten on one as older (metadata timestamps and stuff)?
16:04 bluenemo how would I check if I have self-heal on?
16:04 bluenemo (brb)
16:04 skylar each node keeps a changelog and the fuse layer I think sorts out who has the most recent change
16:05 skylar you can set self-heal with "gluster volume set <vol> cluster.data-self-heal"
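
The set command skylar quotes takes a value; a complete invocation would be along these lines (volume name is a placeholder). As JoeJulian points out a little further down, these client-side heal options are on by default in the releases discussed here.

    gluster volume set myvol cluster.data-self-heal on
    # related AFR toggles:
    gluster volume set myvol cluster.metadata-self-heal on
    gluster volume set myvol cluster.entry-self-heal on
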
16:05 JoeJulian @extended attributes
16:05 glusterbot JoeJulian: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
16:06 JoeJulian bluenemo: ^ the #2 link explains how afr determines who's unhealthy.
16:07 bluenemo thanks :)
16:09 bluenemo skylar, should such options show up in gluster volume info my_vol
16:09 spcmastertim joined #gluster
16:11 skylar bluenemo - no, it would be "gluster volume get my_vol all"
16:11 JoeJulian scrolling back... self heal is an integral part of replication and is indeed enabled by default.
16:11 skylar ah, good to know, still new to the game and couldn't remember if I had turned it on or if it was already on
16:12 JoeJulian skylar! You just taught me something.
16:12 JoeJulian That's new.
16:12 bluenemo skylar, gluster volume help doesnt list "get" on 3.5.6
16:13 skylar bluenemo - my first gluster contact started with 3.6 unfortunately, so I don't know the old commands
16:13 skylar I would start with "gluster volume help|grep get" and see if you can find it in a different permutation
16:13 skylar JoeJulian - oh?
16:13 bluenemo dude amazon is annoying me today..
16:13 JoeJulian Prior to 3.7, the defaults are shown in 'gluster volume set help'. Only entries that have been changed show up in 'volume info'
16:14 skylar ooh, thanks, I will note that in our local docs
16:14 bluenemo nope, no get
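
To summarize the version split being worked out here (volume name is a placeholder): 'volume get' exists on skylar's newer release but not on bluenemo's 3.5.6, and as JoeJulian notes above, on pre-3.7 releases the defaults are only visible in the set help output.

    # newer releases: every option and its current value, per volume
    gluster volume get myvol all

    # 3.5/3.6: defaults are listed in the help text;
    # 'volume info' only shows options changed from their defaults
    gluster volume set help
    gluster volume info myvol
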
16:16 bluenemo hm. brick log is telling me, just out of the blue, "connection to 172.16.0.132:24007 failed (Connection refused)" - all other ports too
16:17 bluenemo gluster cli cant connect it says
16:17 bluenemo all services are running though..
16:17 bluenemo but yeah concerning init scripts - what's the way to restart the daemons themselves?
16:17 skylar does "gluster volume heal my_vol info" say anything?
16:17 skylar as for services, I use "service" on RHEL
16:17 skylar I imagine ubuntu is either upstart or systemd, so that would likely be different
16:18 bluenemo skylar, actually gluster volume status my_vol doesnt work
16:19 bluenemo log also doesnt show anything that tells me much
16:20 JoeJulian ubuntu is upstart. rhel 7 is systemd so get ready to change from service <servicename> start to systemctl start <servicename>
16:21 bluenemo JoeJulian, so you have some stuff working with service gluster restart?
16:21 bluenemo skylar, heal also times out
16:21 JoeJulian bluenemo: Sounds like glusterd is dead.
16:22 JoeJulian To start glusterd in ubuntu, it's "start glusterfs-server" iirc.
16:23 bluenemo nifty, thank you! :)
16:23 bluenemo hm. now nfs and self-heal are off in status (after starting it again). I'm confused by the logs not telling me "you screwed up $this"
16:24 bluenemo JoeJulian, do you know the other services names for "start"?
16:25 CU-Paul on ubuntu14.04 it's "service glusterfs-server start"
16:25 _joel joined #gluster
16:26 bluenemo so when I stop that, should there be any processess left with "gluster"?
16:26 CU-Paul there probably will be, for me, glusterfsd and glusterfs usually keep running, only glusterd is stopped, iirc
16:26 bluenemo I did a stop glusterfs-server and there is still glusterfsd and glusterfs processes
16:26 CU-Paul so you can killall gluster{fs,fsd} to get the rest
16:27 JoeJulian CU-Paul: "service blah start" for upstart scripts just calls "start blah"
16:27 JoeJulian s/scripts/jobs/
16:27 glusterbot What JoeJulian meant to say was: CU-Paul: "service blah start" for upstart jobs just calls "start blah"
16:27 CU-Paul JoeJulian, did not know that, thanks
16:27 JoeJulian I know way more about upstart than I'd like to admit, nor will I ever use any of this useless knowledge. :)
16:28 bluenemo but they should be stopped by that right?
16:29 bluenemo hm no. after I killed them they wont start via service script.
16:30 bluenemo "reboot"
16:30 JoeJulian If upstart thinks the job is completed and the service is running, you may have to "restart glusterfs-server" instead.
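
Pulling the Ubuntu 14.04 (upstart) commands from this exchange together: stopping the glusterfs-server job only stops the glusterd management daemon, so the brick (glusterfsd) and client/self-heal (glusterfs) processes have to be killed separately if everything really needs to come down. A sketch:

    service glusterfs-server restart     # upstart: same as 'restart glusterfs-server'
    stop glusterfs-server                # stops glusterd only
    killall glusterfsd glusterfs         # bricks, fuse mounts and self-heal daemons, if needed
    start glusterfs-server               # glusterd restarts bricks for volumes in the started state
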
16:32 bluenemo hm yeah so after reboot everything is dead :D haha. I think just mounting with user_xattr might have bugged it a bit..
16:33 bluenemo hm no was just offline. hm. should set auto online or sth
16:33 CU-Paul JoeJulian, I know it's just informational but I'm still getting permission denied in brick logs for MKDIR commands.  That would seem to me to be a key piece of why this node isn't replicating?  Other than "inode for gfid  not found, anonymous fd creation failed" messages that is pretty much the only thing in the brick log.
16:38 bluenemo how do I start the self-heal daemon?
16:39 bluenemo will it start by defualt if there is only one node?
16:39 JoeJulian it will start by default.
16:40 bluenemo ah ok. start force did start it
16:42 bluenemo hm. "No data available occurred while creating symlinks" did not disappear after remounting with user_xattr
16:43 bluenemo meh. guess I have to resetup and switch to xfs.
16:43 bluenemo I started with xfs and then read ext4 for bricks and thought hey :) why not ;)
16:43 JoeJulian bluenemo: Do you currently have geo-replication configured?
16:44 bluenemo no I only have one node running just atm. and in general also no. I'm not sure if I should use gluster for backups too..
16:44 JoeJulian But you did once?
16:44 squizzi_ joined #gluster
16:44 bluenemo no.
16:44 JoeJulian hmm, I wonder how you got marker enabled then.
16:45 bluenemo ?
16:45 bluenemo sry cant follow. marker?
16:45 JoeJulian I scrolled back and looked at the log message you're worried about. It's in marker.c.
16:45 JoeJulian marker is a translator that's used for geo-replication.
16:46 JoeJulian I'm not sure if it's used for anything else now. quota perhaps?
16:46 bluenemo hm. when I copy to new bricks, I can just pull out of /mnt/brick right?
16:46 bluenemo (kill old gluster setup, setup completely new with new disks, attach old brick disk and copy to new mount of new gluster setup)?
16:47 JoeJulian yes
16:47 bluenemo hm. no dont have quota atm either
16:47 bluenemo ok
16:48 bluenemo JoeJulian, did you run a production workload yet? I'm not sure if performance kicks me atm.. I mostly have smaller files, php websites, and a BIG bunch of pictures.
16:48 JoeJulian Try gluster volume set $vol geo-replication.indexing off
16:48 bluenemo (as in around 300G)
16:48 JoeJulian I've run many production workloads.
16:48 JoeJulian Here's my ,,(php) recommendations.
16:48 glusterbot (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
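
The fuse options glusterbot lists are arguments to the glusterfs client binary; a hedged example invocation is below, with server, volume, timeout values and mount point all placeholders to be tuned for the workload.

    glusterfs --volfile-server=server1 --volfile-id=myvol \
              --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 \
              --fopen-keep-cache /var/www/shared
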
16:48 bluenemo ah interesting. When I migrated this to xfs I'll get more into performance and caching stuff - good to know :)
16:49 JoeJulian In short, web services should cache as close to the user as possible, as much as you can.
16:49 bluenemo ah I already found and flew over it :) cool
16:50 bluenemo I didnt write the PHP though, just the admin
16:51 JoeJulian Yeah, my recommendations to require changing php.
16:51 JoeJulian s/to/don't/
16:51 glusterbot What JoeJulian meant to say was: Yeah, my recommendations don't require changing php.
16:51 JoeJulian (don't talk on the phone and irc at the same time)
16:52 bluenemo lol the glusterbot :)
16:52 bluenemo hehe :D
16:53 jrm16020 joined #gluster
16:53 bluenemo hm. I'm writing a salt formula for gluster. I think I'll just kick out ext4 and only offer xfs, what do you think?
16:55 Rapture joined #gluster
16:56 JoeJulian I'm ambivalent. I like Theodore, but I kind-of like the xfs code base better. I would leave it open to options though if this is for public consumption. There's a lot of people that want to use zfs or btrfs for their own reasons.
16:56 theron joined #gluster
16:57 JoeJulian Though now that we have EC, the only benefit to zfs is dedup, and I haven't seen it be that much of a benefit.
16:57 bluenemo hm. true. ok :) I'm not sure yet, but I guess I'll put it on github. currently its only for my customers. lets see how it does first.
16:57 bluenemo EC?
16:57 JoeJulian Make it work. Make it better later.
16:57 JoeJulian erasure coding.
16:57 JoeJulian @lucky erasure coding
16:57 glusterbot JoeJulian: https://en.wikipedia.org/wiki/Erasure_code
16:57 bluenemo hm. trying to code so I can add $stuff later :)
16:58 bluenemo but I'm not really the coder - I'm kinda more the admin. well.. sth in between :)
16:58 JoeJulian You're going to write it, use it, hate it, and rewrite it anyway. ;)
16:58 bluenemo :D
17:01 squizzi_ joined #gluster
17:05 RobertLaptop joined #gluster
17:09 GB21 joined #gluster
17:10 Pupeno joined #gluster
17:20 amye joined #gluster
17:22 Pupeno joined #gluster
17:29 squaly joined #gluster
17:30 plarsen joined #gluster
17:33 hagarth joined #gluster
17:36 CU-Paul JoeJulian: my cluster is replicating!  thanks for all your help.  Not sure exactly what it was but I disconnected all clients from all nodes but one and noticed the start of healing/replication.  The healing continued without issue across the other two nodes.
17:39 timotheus1 joined #gluster
17:40 JoeJulian interesting.
17:49 spcmastertim joined #gluster
17:51 _maserati JoeJulian: do you know about supermicro raid punctures/bad blocks and how they can spread if a drive fails and a lazy admin just puts a new drive in
17:51 CU-Paul You said something yesterday about the possibility of being stuck in a healing queue.  Is it also possible that by removing all the other clients, since there was no other activity on the volume except for this one node, that it was able to finish the healing of that node and then allow the new node to start healing?
17:55 Pupeno joined #gluster
17:57 bluenemo ah fun. its not allowed to create /mnt/.glusterfs on a mounted volume :)
17:57 bluenemo makes sense
18:01 JoeJulian CU-Paul: Seems likely.
18:02 JoeJulian _maserati: Nope, hadn't heard about that. Nice "feature".
18:04 _maserati JoeJulian: well, i wonder if that has the possibility of corrupting any gluster data =(
18:05 JoeJulian Doesn't seem likely since it sounds like the raid controller is maintaining its own bad block list and just won't put data there.
18:05 _maserati i hope thats the case
18:06 _maserati and the fix is gonna suck.... i need to remove that node from the cluster, wipe the entire system, raid-0 the drives, then re-raid them to whatever i wanted and rebuild the node
18:07 bluenemo JoeJulian, I've switched to xfs and copying data to the new vol now, I'm getting alotta these errors: https://paste.debian.net/313096/
18:07 glusterbot Title: debian Pastezone (at paste.debian.net)
18:08 bluenemo gluster volume status looks happy though. Data is nicely accessible
18:09 bluenemo hm no it's not, sry. metadata, such as users and groups, is not copied by rsync
18:09 bluenemo used "defaults" for xfs bricks
18:10 bluenemo I'm copying from ext4, an ex brick disk, to a gluster volume now, backed by xfs bricks.
18:12 spcmastertim joined #gluster
18:12 bluenemo hm. no, dont really get why I get this now for creating files. chown works so far. error:  E [marker.c:2573:marker_removexattr_cbk] 0-gfs_fin_web-marker: No data available occurred while creating symlinks
18:12 bluenemo its in /var/log/glusterfs/bricks/srv-gfs_fin_web-brick.log
18:13 bluenemo oh, now it by itself does a LOT of  metadata self heal  is successfully completed,   metadata self heal from source gfs_fin_web-client-0 to gfs_fin_web-client-1,  metadata - Pending matrix:  [ [ 3 3 ] [ 3 3 ] ],  on <gfid:6863a077-a87e-449f-b4ae-028f989055f4>
18:14 zhangjn joined #gluster
18:15 bluenemo is that behavior normal? https://paste.debian.net/313097/
18:15 glusterbot Title: debian Pastezone (at paste.debian.net)
18:19 coredump joined #gluster
18:20 bluenemo i dont have selinux anywhere
18:21 bluenemo hm. nfs is enabled and it does not support extended attributes. however i'm not mounting via nfs..
18:22 bluenemo nope, that wasnt it.
18:22 JoeJulian No, it's marker.
18:22 JoeJulian It's trying to set attributes on files that don't exist.
18:23 JoeJulian gluster volume set $vol geo-replication.indexing off
18:23 JoeJulian I wonder if that happens just by having the geo-rep package installed. I don't usually install it.
18:24 natarej joined #gluster
18:24 natarej_ joined #gluster
18:24 bluenemo JoeJulian, still shows up after the geo-rep off. Found this and my version is below: https://bugzilla.redhat.com/show_bug.cgi?id=1188064
18:24 glusterbot Bug 1188064: unspecified, unspecified, 3.6.3, bugs, MODIFIED , log files get flooded when removexattr() can't find a specified key or value
18:25 bluenemo JoeJulian, on ubuntu there is only glusterfs-{client,server,dbg,common} packages.
18:26 bluenemo hm yes my version is too old. meh.
18:27 bluenemo JoeJulian, do you think I should just update using this ppa? https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.6 I currently have the 3.5 ppa
18:27 glusterbot Title: glusterfs-3.6 : “Gluster” team (at launchpad.net)
18:28 bluenemo version there seems 3.6.6..
18:29 bluenemo what ppa do you recommend for production usage?
18:30 spcmastertim joined #gluster
18:31 JoeJulian I'm pretty happy with 3.7
18:36 bluenemo can I just add the new ppa and apt-get upgrade?
18:36 JoeJulian yes
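
A sketch of that upgrade path on Ubuntu, assuming the PPA named in the launchpad URL pasted earlier; as noted further down, the packaging restarts the service during the upgrade, so plan for that.

    add-apt-repository ppa:gluster/glusterfs-3.7
    apt-get update
    apt-get dist-upgrade        # restarts glusterd as part of the package upgrade
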
18:39 Slashman joined #gluster
18:43 skylar joined #gluster
19:00 arcolife joined #gluster
19:01 arcolife joined #gluster
19:03 bluenemo JoeJulian, cool, I'll do that. Do I need to stop gluster or can I do that live? I have two servers in rep mode. Also another thing. I'm copying 300gb of files, mostly pics around 1-2mb via rsync from normal disk to volume. Currently I'm getting about 1MB/s :/ I've got 2gb of ram which does buffering.. how much do you recommend?
19:04 bluenemo I want to run this over night so I can scale up the instance for that time
19:05 cocotton joined #gluster
19:07 cocotton Hi channel. I'm trying to install gluster-server on a fresh redhat 6.7 yet yum tells me it does not find xfsprogs. Anyone knows where to get that? I'm unable to find it
19:09 cocotton It seems like I have to buy an add-on for red hat, but we're still using ext4 filesystems :/
19:10 akik cocotton: it's in the base repo
19:10 cocotton That's weird, I can't find it
19:10 _joel joined #gluster
19:10 bluenemo hm. got some files in  gluster volume heal gfs_fin_web info heal-failed, about 2k. what do I do with those?
19:11 akik cocotton: http://mirror.us.leaseweb.net/centos/6/os/x86_64/Packages/
19:11 glusterbot Title: mirror.wdc1.us.leaseweb.net | powered by LeaseWeb (at mirror.us.leaseweb.net)
19:12 alghost joined #gluster
19:13 akik cocotton: run yum -v repolist and see if you have those configured ok
19:14 cocotton Ah I guess they have not been installed since I'm on RH?
19:14 akik cocotton: oh sorry i read that as centos
19:14 cocotton If only I could use centos :'(
19:15 akik but as rhel and centos are binary compatible, you should be able to install it from there too
19:15 JoeJulian bluenemo: recommendations depend on the use case. :) If you're not in production, I would stop the volume, stop glusterd, upgrade, then start everything again.
19:15 cocotton akik: Ok I'll try this right now :)
19:15 bluenemo JoeJulian, currently yes (did just that atm), but how about production?
19:16 JoeJulian Make sure heal info is empty, upgrade one, wait for heal info to be empty again, upgrade the other.
19:16 bluenemo what's also to remember here is that apt-get dist-upgrade will start the service automatically.
19:17 JoeJulian Yeah... that behavior constantly leaves me baffled.
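
The rolling variant JoeJulian describes, per server and with a placeholder volume name, comes down to:

    gluster volume heal myvol info          # wait until no entries are listed
    apt-get update && apt-get dist-upgrade  # the packaging restarts glusterd itself
    gluster volume heal myvol info          # wait until clean again, then move to the next server
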
19:19 bluenemo hm. I forgot that service gluster-stuff stop doesnt stop some of the services
19:20 bluenemo well its np i'll just plan some downtime and make snapshots.
19:23 cocotton akik: Yay it worked :)
19:25 bluenemo hm thats not what I want to hear is it?  Version of Cksums gfs_fin_web differ. local cksum = 2212786344, remote cksum = 588543667 on peer omega
19:25 bluenemo ah found the doc for it
19:26 akik cocotton: i hope you don't get into a dependency hell
19:26 cocotton At this point everything seems fine (I'll admit I'm a bit anxious ;) )
19:28 bluenemo hm after http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected peer probe gives me  peer probe: failed: Probe returned with unknown errno 107
19:29 bluenemo ah my bad..
19:29 bluenemo nice >:) stuff works..
19:30 gem joined #gluster
19:31 bluenemo hm no :) gives me  0-glusterfs: failed to get the 'volume file' from server  0-mgmt: failed to fetch volume file (key:/gfs_fin_web
19:32 bluenemo also volume heal info tells me my vol does not exist
19:33 bluenemo hm healing works, heal info wont find the vol: https://paste.debian.net/313115/
19:33 glusterbot Title: debian Pastezone (at paste.debian.net)
19:35 mpietersen joined #gluster
19:36 arcolife joined #gluster
19:37 tba798 joined #gluster
19:38 mreamy joined #gluster
19:40 bluenemo guess I hit this bug in my update from .5 to .7 https://bugzilla.redhat.com/show_bug.cgi?id=1191176
19:40 glusterbot Bug 1191176: urgent, unspecified, ---, bugs, ON_QA , Since 3.6.2: failed to get the 'volume file' from server
19:42 JoeJulian Oh, debs don't do that automatically?
19:42 JoeJulian One more reason to go with an EL distro... ;)
19:43 bluenemo :P https://bugzilla.redhat.com/show_bug.cgi?id=1191176 script at the end might help.. currenlty testing it
19:43 glusterbot Bug 1191176: urgent, unspecified, ---, bugs, ON_QA , Since 3.6.2: failed to get the 'volume file' from server
19:44 bluenemo thats why I asked can I just upgrade :P
19:48 bluenemo well what's cool is that through all this the web worker client without a gluster server still has its mountpoint mounted without any bigger problems :) remounting doesn't work, but the one I didn't touch from the start is still there :D
19:55 Pupeno joined #gluster
20:00 JoeJulian bluenemo: find /var/lib/glusterd/vols -name '*.vol' -exec mv {} {}.orig ; pkill glusterd ; glusterd --xlator-option *.upgrade=on -N ; start glusterfs-server
20:01 JoeJulian May have to do up to "start" on all servers before the start will work.
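
Spelled out with the shell escaping it needs (the \; terminator for find's -exec, and quoting so the shell does not glob *.upgrade=on), the recovery sequence above is, per server:

    find /var/lib/glusterd/vols -name '*.vol' -exec mv {} {}.orig \;
    pkill glusterd
    glusterd --xlator-option '*.upgrade=on' -N    # regenerates the volfiles, then exits
    start glusterfs-server
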
20:02 mhulsman joined #gluster
20:03 bluenemo yeah I did that, it worked, thanks :)
20:05 bluenemo as for my mount problems - that was port 49153 being blocked
20:05 bluenemo I opened 49152 - 49160 tcp now, is that ok?
20:05 bluenemo ok. starting rsync again. lets see if the errors are gone :)
20:06 JoeJulian ok for now
20:06 JoeJulian @ports
20:06 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
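
Translated into firewall rules, glusterbot's port list works out to roughly the following iptables sketch; widen the 49152 range to match the number of bricks per server.

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT                          # glusterd management (+rdma)
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT                          # one port per brick, from 49152 (gluster >= 3.4)
    iptables -A INPUT -p tcp -m multiport --dports 111,2049,38465:38468 -j ACCEPT   # built-in NFS, NLM, portmapper
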
20:10 bluenemo got a lot of  W [fuse-bridge.c:1230:fuse_err_cbk] 0-glusterfs-fuse: 126497: REMOVEXATTR() /file  now
20:11 DV joined #gluster
20:14 theron joined #gluster
20:16 JoeJulian W = Warning.
20:17 JoeJulian which usually means ignore it.
20:25 virusuy Hey guys, i'm trying to create a replica volume with two nodes, one with 3.4 and the other with 3.7
20:25 virusuy just for fun
20:25 virusuy but isn't working
20:25 virusuy Is there a compatibility matrix ?
20:27 bennyturns joined #gluster
20:28 JoeJulian No
20:32 bennyturns joined #gluster
20:43 bluenemo ok thank you :) Data is copying now, will be back tomorrow. Thanks a lot for your help JoeJulian :) Have a nice evening.
20:44 bennyturns joined #gluster
20:47 johnmark joined #gluster
20:48 jobewan joined #gluster
20:54 a_ta joined #gluster
21:06 Pupeno_ joined #gluster
21:07 spcmastertim joined #gluster
21:13 poornimag joined #gluster
21:17 skoduri_ joined #gluster
21:30 skoduri__ joined #gluster
21:32 skoduri joined #gluster
21:35 lbarfield Has anyone had issues with an intermediate master not syncing all files to its slave correctly, in a cascading setup?
21:36 dlambrig_ joined #gluster
21:39 JoeJulian I've never heard of that complaint, but then again, I've never heard of anyone describing that configuration, so not sure if that's cause or effect.
21:45 Sjors joined #gluster
21:52 lbarfield JoeJulian: That configuration is described here: https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Geo%20Replication/#exploring-geo-replication-deployment-scenarios
21:52 glusterbot Title: Geo Replication - Gluster Docs (at gluster.readthedocs.org)
21:53 lbarfield Multi-site cascading: A -> B -> C
21:53 JoeJulian Yeah, I've seen the document, just not had anybody in here describing using it.
21:53 JoeJulian That could be because they've not had problems.
21:53 lbarfield Yeah, I have no idea what the problem is.
21:53 JoeJulian Or it might be that there's a relatively small number of people doing it.
21:53 * JoeJulian shrugs.
21:54 lbarfield When I start it up about 5% of the files get copied over, then it goes to Changelog Crawl status and leaves the rest of the data missing.
21:55 JoeJulian Any clues in the logs?
21:58 lbarfield Not that I've found.  Master just shows the changelog crawls with no errors.  Slave doesn't appear to have errors either.
21:58 lbarfield I'm guessing it's something to do with the xtime/stime xattr stuff, but I'm not sure how that could be screwed up on a first time setup, or how to go about fixing it without breaking all the things.
22:00 rwheeler joined #gluster
22:05 vincent_vdk joined #gluster
22:06 xMopxShell joined #gluster
22:06 mikemol joined #gluster
22:07 DV joined #gluster
22:20 arthurh joined #gluster
22:30 cholcombe the gluster volume number-of-bricks math. Does that go distribute x replicate x stripe? I used to have this written down somewhere but lost it
22:41 Mr_Psmith joined #gluster
22:42 fala joined #gluster
22:44 plarsen joined #gluster
22:51 * JoeJulian shudders at the mention of stripe.
22:54 JoeJulian cholcombe: yes, they go in the order the translators are used (from the perspective of the client). distribute translator has replica subvolumes. replica translator has stripe (or disperse) subvolumes.
22:55 cholcombe JoeJulian, ah ok i didn't realize it was in translator order
22:55 JoeJulian I hadn't even thought about it until you asked. :)
22:56 cholcombe it's weird right?
22:56 JoeJulian That someone's using the stripe translator, yes.
22:56 cholcombe oh no just the 1 x 2 = 2 line
22:57 cholcombe i think it could be presented more clearly
22:57 cholcombe i'm not sure how off hand
22:58 JoeJulian Only a little. At least they're not representing it with tens boxes and core math.
22:58 gildub joined #gluster
22:58 cholcombe true..
22:58 cholcombe that'd be much worse
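
Concretely, for a distributed-replicated volume the line in question reads distribute-count x replica-count; a replica-2 volume built from 8 bricks, like the one ro_ was creating earlier, would show up along these lines (hypothetical excerpt):

    # from 'gluster volume info myvol'
    Type: Distributed-Replicate
    Number of Bricks: 4 x 2 = 8     # 4 distribute subvolumes, each a 2-way replica
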
23:00 poornimag joined #gluster
23:14 edwardm61 joined #gluster
23:24 zhangjn joined #gluster
23:55 suliba joined #gluster
23:59 zhangjn joined #gluster
