
IRC log for #gluster, 2013-06-05


All times shown according to UTC.

Time Nick Message
00:15 _pol joined #gluster
00:24 root____5 joined #gluster
00:38 brunoleon_ joined #gluster
01:15 xavih joined #gluster
01:17 majeff joined #gluster
01:44 recidive joined #gluster
02:05 harish joined #gluster
02:45 majeff joined #gluster
03:05 majeff joined #gluster
03:07 vpshastry joined #gluster
03:18 mohankumar__ joined #gluster
03:22 sgowda joined #gluster
03:35 mjrosenb joined #gluster
03:35 charlescooke__ joined #gluster
03:39 cicero joined #gluster
03:39 Ramereth|home joined #gluster
03:44 eryc joined #gluster
03:45 hchiramm_ joined #gluster
03:58 hjmangalam1 joined #gluster
04:17 majeff joined #gluster
04:22 majeff joined #gluster
04:23 _pol joined #gluster
04:23 majeff joined #gluster
04:28 anands joined #gluster
04:31 _pol joined #gluster
04:42 psharma joined #gluster
04:52 shylesh joined #gluster
04:54 rotbeard joined #gluster
05:09 saurabh joined #gluster
05:13 vpshastry joined #gluster
05:26 hagarth joined #gluster
05:28 lalatenduM joined #gluster
05:29 lala_ joined #gluster
05:31 vpshastry joined #gluster
05:50 satheesh joined #gluster
05:53 bala joined #gluster
05:59 majeff joined #gluster
06:00 thekev joined #gluster
06:08 satheesh joined #gluster
06:09 vimal joined #gluster
06:12 raghu joined #gluster
06:18 bulde joined #gluster
06:19 bulde glusterbot: @stats
06:21 jtux joined #gluster
06:29 hchiramm_ joined #gluster
06:30 ollivera joined #gluster
06:34 aravindavk joined #gluster
06:46 bulde joined #gluster
06:51 hchiramm_ joined #gluster
06:54 ekuric joined #gluster
06:57 hchiramm_ joined #gluster
07:02 guigui1 joined #gluster
07:05 ctria joined #gluster
07:15 vshankar joined #gluster
07:15 majeff joined #gluster
07:15 hybrid512 joined #gluster
07:18 ujjain joined #gluster
07:20 manik joined #gluster
07:39 brunoleon joined #gluster
07:41 ProT-0-TypE joined #gluster
07:45 hybrid5123 joined #gluster
07:47 dobber_ joined #gluster
07:47 ndevos @channelstats
07:47 glusterbot ndevos: On #gluster there have been 136881 messages, containing 5852696 characters, 982213 words, 4024 smileys, and 506 frowns; 849 of those messages were ACTIONs. There have been 50995 joins, 1605 parts, 49441 quits, 19 kicks, 141 mode changes, and 5 topic changes. There are currently 187 users and the channel has peaked at 217 users.
07:47 ndevos bulde: ^
07:54 StarBeast joined #gluster
08:06 manik joined #gluster
08:08 balunasj|mtg joined #gluster
08:11 Norky joined #gluster
08:14 bulde ndevos++
08:14 bulde :p
08:15 ndevos hehe
08:16 dobber_ joined #gluster
08:17 ricky-ticky joined #gluster
08:34 lanning joined #gluster
08:47 rb2k joined #gluster
08:51 Airbear joined #gluster
08:58 rb2k joined #gluster
09:09 majeff joined #gluster
09:14 Guest44953 left #gluster
09:21 Uzix_BNC joined #gluster
09:24 the-me_ joined #gluster
09:30 eryc joined #gluster
09:30 morse joined #gluster
09:30 mynameisbruce joined #gluster
09:30 chlunde_ joined #gluster
09:30 ehg joined #gluster
09:30 awheeler joined #gluster
09:30 swaT30 joined #gluster
09:30 DWSR joined #gluster
09:30 avati joined #gluster
09:30 NeonLicht joined #gluster
09:31 benpi joined #gluster
09:32 StarBeast joined #gluster
09:50 lbalbalba joined #gluster
10:09 majeff joined #gluster
10:12 spider_fingers joined #gluster
10:14 StarBeast joined #gluster
10:18 badone_ joined #gluster
10:33 badone_ joined #gluster
10:33 avati_ joined #gluster
10:34 kke hmm
10:34 kke attachments/2013/5/29/8/c/8c082108-1e9a-4d23-9b50-6dfa788424e6/D154322261.154322374.pdf: writable, regular file, no read permission
10:34 kke hangs the application server when trying to access the file
10:34 kke there might be some other files too
10:34 kke because our service is currently hanging up all the time
10:35 kke -rw-rw-r-- 1 root root 14354 2013-05-29 10:03 attachments/2013/5/29/8/c/8c082108-1e9a-4d23-9b50-6dfa788424e6/D154322261.154322374.pdf
10:35 kke i could read it through the brick mount directly
10:35 kke but not through gluster
10:45 eryc joined #gluster
10:45 morse joined #gluster
10:45 mynameisbruce joined #gluster
10:45 chlunde_ joined #gluster
10:45 ehg joined #gluster
10:45 awheeler joined #gluster
10:45 swaT30 joined #gluster
10:45 DWSR joined #gluster
10:45 NeonLicht joined #gluster
10:47 swaT30 joined #gluster
10:47 edward1 joined #gluster
10:52 edoceo joined #gluster
10:52 NeonLicht joined #gluster
10:53 harish joined #gluster
11:03 thekev joined #gluster
11:03 lpabon joined #gluster
11:18 duerF joined #gluster
11:19 hchiramm_ joined #gluster
11:42 manik joined #gluster
11:43 manik1 joined #gluster
11:49 kke i did a remove-brick, how can i monitor its status?
11:49 kke or pause it or something
11:49 kke because fix-layout now says replace-brick in process
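A minimal sketch of how a pending remove-brick can be checked or aborted, assuming a GlusterFS 3.3-style CLI where remove-brick has status/stop subcommands; the volume and brick names are hypothetical:

    # check how far the data migration has progressed
    gluster volume remove-brick myvol server1:/export/brick1 status
    # abort the pending remove-brick if needed
    gluster volume remove-brick myvol server1:/export/brick1 stop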
11:50 spider_fingers joined #gluster
11:50 manik joined #gluster
11:51 balunasj joined #gluster
11:53 datapulse left #gluster
11:56 charlescooke_ joined #gluster
12:03 hagarth joined #gluster
12:04 ProT-0-TypE joined #gluster
12:16 chirino joined #gluster
12:24 aliguori joined #gluster
12:30 hagarth joined #gluster
12:31 mooperd joined #gluster
12:34 eryc joined #gluster
12:34 morse joined #gluster
12:34 mynameisbruce joined #gluster
12:34 chlunde_ joined #gluster
12:34 ehg joined #gluster
12:34 awheeler joined #gluster
12:34 DWSR joined #gluster
12:38 bennyturns joined #gluster
12:45 majeff joined #gluster
12:46 jbourke joined #gluster
12:50 guigui3 joined #gluster
13:04 ccha3 joined #gluster
13:06 JordanHackworth joined #gluster
13:10 rob__ joined #gluster
13:11 kevein joined #gluster
13:11 Staples84 joined #gluster
13:13 arusso_znc joined #gluster
13:14 arusso- joined #gluster
13:14 joelwallis joined #gluster
13:14 rob__ joined #gluster
13:16 chirino_m joined #gluster
13:16 ProT-O-TypE joined #gluster
13:17 sysconfi- joined #gluster
13:18 lpabon joined #gluster
13:19 thekev joined #gluster
13:19 sysconfig joined #gluster
13:24 bfoster_ joined #gluster
13:24 ndevos_ joined #gluster
13:25 sw__ joined #gluster
13:26 soukihei_ joined #gluster
13:26 haakon_ joined #gluster
13:29 samppah_ joined #gluster
13:31 codex joined #gluster
13:32 bdperkin joined #gluster
13:34 DataBeaver joined #gluster
13:34 stigchri1tian joined #gluster
13:34 partner_ joined #gluster
13:37 vpshastry1 joined #gluster
13:37 ccha3 joined #gluster
13:37 kkeithley joined #gluster
13:37 snarkyboojum_ joined #gluster
13:37 nightwalk joined #gluster
13:37 dxd828 joined #gluster
13:37 atrius joined #gluster
13:37 isomorphic joined #gluster
13:37 SteveCooling joined #gluster
13:39 atoponce joined #gluster
13:41 abyss^ joined #gluster
13:41 NuxRo joined #gluster
13:41 xymox joined #gluster
13:42 coredumb joined #gluster
13:42 lyang0 joined #gluster
13:44 phox joined #gluster
13:44 rwheeler joined #gluster
13:45 puebele joined #gluster
13:46 recidive joined #gluster
13:48 bfoster_ joined #gluster
13:48 ndevos_ joined #gluster
13:48 bdperkin joined #gluster
13:48 vpshastry1 joined #gluster
13:48 rwheeler joined #gluster
13:48 puebele joined #gluster
13:49 guigui1 joined #gluster
13:49 atoponce joined #gluster
13:50 manik joined #gluster
13:51 plarsen joined #gluster
13:52 badone_ joined #gluster
13:54 silajim joined #gluster
13:55 waldner_ joined #gluster
13:55 waldner_ joined #gluster
13:59 portante joined #gluster
14:00 mohankumar__ joined #gluster
14:01 wushudoin| joined #gluster
14:02 StarBeast joined #gluster
14:04 hjmangalam joined #gluster
14:06 salamij92 joined #gluster
14:07 salamij92 joined #gluster
14:07 neofob joined #gluster
14:08 lalatenduM joined #gluster
14:16 portante joined #gluster
14:16 bdperkin joined #gluster
14:17 lpabon joined #gluster
14:18 dustint joined #gluster
14:23 kaptk2 joined #gluster
14:25 the-me joined #gluster
14:26 avati joined #gluster
14:27 mriv_ joined #gluster
14:28 theron__ joined #gluster
14:29 mohankumar joined #gluster
14:29 hagarth1 joined #gluster
14:29 edward2 joined #gluster
14:30 soukihei joined #gluster
14:30 rwheeler joined #gluster
14:30 rwheeler joined #gluster
14:30 lanning_ joined #gluster
14:30 vimal joined #gluster
14:34 social_ joined #gluster
14:34 social_ Hello, can someone give me quick definition of situations when I get Stale NFS file handle on glusterfs?
14:35 bugs_ joined #gluster
14:37 vimal joined #gluster
14:42 salamij92 hello, is anyone here who can help me?
14:46 salamij92 hello? is anyone looking at the chat?
14:47 failshell joined #gluster
14:48 jbrooks joined #gluster
14:48 edward2 joined #gluster
14:48 mohankumar joined #gluster
14:48 the-me joined #gluster
14:48 plarsen joined #gluster
14:48 guigui1 joined #gluster
14:48 puebele joined #gluster
14:48 lyang0 joined #gluster
14:48 coredumb joined #gluster
14:48 eryc joined #gluster
14:48 morse joined #gluster
14:48 mynameisbruce joined #gluster
14:48 chlunde_ joined #gluster
14:48 ehg joined #gluster
14:48 awheeler joined #gluster
14:48 DWSR joined #gluster
14:49 harish joined #gluster
14:50 lh joined #gluster
14:50 lh joined #gluster
14:50 mohankumar joined #gluster
14:51 salamij92 hello, is anyone here who can help me?
14:53 al joined #gluster
14:54 Norky I have one node (of four) on which the glusterfs  process that (I believe) handles NFS access is quickly growing to a large size and consuming all available memory. It is eating 32GiB of RAM and going into swap within an hour of being rebooted. Repeatedly.
14:54 rwheeler joined #gluster
14:55 salamij92 I think that no one is looking at this...
14:55 Norky I don't currently use NFS to access the volumes, only FUSE, and SMB (via the RHSS hooks which start a samba share for each gluster volume)
14:56 Norky salamij92, it's IRC, just ask your question and wait. Many of the developers or more knowledgeable folks are asleep at this hour
14:57 salamij92 it depends on what time zone you are in ;)
14:58 Norky indeed.
14:58 Norky also, don't ask to ask, just ask
14:59 Norky explain your question/issue clearly and concisely. If someone can help, they will.
14:59 salamij92 thanks for the advice
15:00 dbruhn__ salamij92, what's the issue you need help with?
15:00 balunasj joined #gluster
15:00 gmcwhistler joined #gluster
15:00 wN joined #gluster
15:00 rosco joined #gluster
15:01 _pol joined #gluster
15:02 salamij92 dbruhn__ I created a stripe 4 volume with 32 bricks, everything goes fine but when i do a "dir" in the mount point i get stuck
15:03 salamij92 *it gets stuck
15:03 salamij92 with 28 bricks i have no problems
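For context, a striped volume of the kind described above would typically be created with a sketch like the following (hostnames, brick paths, and the 8-brick count are hypothetical; each consecutive group of 4 bricks forms one stripe set):

    gluster volume create stripevol stripe 4 transport tcp \
        host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1 host4:/bricks/b1 \
        host5:/bricks/b1 host6:/bricks/b1 host7:/bricks/b1 host8:/bricks/b1
    gluster volume start stripevol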
15:04 rwheeler joined #gluster
15:04 semiosis salamij92: ,,(pasteinfo)
15:05 glusterbot salamij92: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
15:05 js_ i have a replicate volume and every client has mounted from the "www-1" node, can www-1 go down without everything messing up?
15:06 semiosis js_: should be able to, but you should try it (try every failure scenario you can think of)
15:06 js_ semiosis: good point
15:06 vshankar joined #gluster
15:07 rob__ joined #gluster
15:08 vimal joined #gluster
15:08 failshell in a distributed-replicated setup, can i do rolling reboots without impacting the volumes and clients? even while they write?
15:08 vpshastry joined #gluster
15:09 dbruhn__ You can end up with split brain that needs to be corrected
15:09 kkeithley Norky: what version, and if you don't use NFS, why not disable it with `gluster volume set $volname nfs.disable on`
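A sketch of that suggestion plus a follow-up check, assuming a volume named myvol; note the gluster NFS server is shared across volumes, so the process should only go away once NFS is disabled on every volume:

    gluster volume set myvol nfs.disable on
    # the gluster NFS server runs as a glusterfs process started with --volfile-id gluster/nfs
    pgrep -fl 'volfile-id gluster/nfs'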
15:09 failshell so its better to make sure no clients write, then do the rolling reboots?
15:09 semiosis failshell: should be able to, but you should try it (try every failure scenario you can think of)
15:10 Norky kkeithley, RHS latest version (v2.1?): glusterfs-3.3.0.7rhs-1.el6rhs.x86_64
15:10 failshell crazy idea, the documentation should list all required open ports for geo-replication
15:10 semiosis dbruhn__: split brain can be avoided (with quorum, for one)
15:10 failshell hopefully 15 nodes out of 16 up should be quorum : )
15:10 dbruhn__ semiosis: Has quorum been released in a GA release yet?
15:11 semiosis dbruhn__: since 3.3.0
15:11 semiosis iirc
15:11 Norky and I have just tried disabling nfs. The problem appeared to get better, but now it has just happened again (no, I don't believe what I'm seeing either), so I've killed the 'errant' process
15:11 failshell i really need to upgrade to 3.3.x
15:12 semiosis failshell: you need to enable quorum, set cluster.quorum-type auto
15:12 semiosis and you need a replica count >= 3 otherwise one brick going down will cause volume to go read-only
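A minimal sketch of the quorum settings being described, assuming GlusterFS 3.3+ and a hypothetical volume named myvol:

    # majority-based client-side quorum: writes are refused when a majority of a replica set is down
    gluster volume set myvol cluster.quorum-type auto
    # alternatively, require a fixed number of live bricks per replica set
    gluster volume set myvol cluster.quorum-type fixed
    gluster volume set myvol cluster.quorum-count 2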
15:13 codex joined #gluster
15:13 failshell hmm i setup my volumes with 2 replicas
15:13 semiosis but even without that you may not get split-brain, depends on your architecture & characteristics of your workload, so you should test all failure scenarios...
15:13 semiosis see ,,(split-brain)
15:13 glusterbot (#1) To heal split-brain in 3.3, see http://goo.gl/FPFUX ., or (#2) learn how to cause split-brain here: http://goo.gl/Oi3AA
15:13 kkeithley disabling nfs didn't make the glusterfs process go away? I think it should have. At least it did in the community 3.4.0alpha I currently have installed. Hmmm.
15:15 Norky kkeithley, yeah, I'm surprised, possibly I'm misunderstanding something, but that is how it appears to me
15:15 Norky http://dpaste.com/hold/1212242/
15:15 glusterbot Title: dpaste: #1212242: Gluster ate my RAM, by Norky (at dpaste.com)
15:16 StarBeast joined #gluster
15:16 Norky memory use from Ganglia https://www.dropbox.com/s/etj7r03m3if3kbm/memory%20use%20on%20lnasilo0.png
15:16 glusterbot <http://goo.gl/JBk6P> (at www.dropbox.com)
15:17 kkeithley Since you have RHS, you should open a support ticket. I can't imagine why the NFS server would use CPU if you have no NFS clients
15:18 Norky I'm going to. I wanted to check if it was a known problem (possibly caused by user misconfiguration)
15:18 Norky thank you though.
15:19 kkeithley yeah, sorry, I don't pay as close attention to rhs. Our gss people should have a better idea of the state of bugs in rhs.
15:20 Norky no worries :)
15:23 salamij92 someone asked about the "log" http://ur1.ca/e7192
15:23 glusterbot Title: #16720 Fedora Project Pastebin (at ur1.ca)
15:23 Norky it's not so much the CPU, the machine has 8 (real) cores otherwise doing very little, it's the fact that it's growing to force the machine into swap which is causing us problems
15:23 wushudoin| left #gluster
15:23 salamij92 the machine has 8 CPUs :P
15:24 salamij92 but 7 of them are at full load (i am running bitcoin mining)
15:24 Norky actually, 6 cores, 12 if you count hyper-threading
15:24 Norky ahh, sorry, you're talking about your own machine, salamij92, I'll shut up
15:25 salamij92 haha, don't worry
15:27 salamij92 dbruhn__: are you here?
15:28 dbruhn__ I am
15:28 semiosis salamij92: do you really need ,,(stripe)?  can you live without it?
15:28 glusterbot salamij92: Please see http://goo.gl/5ohqd about stripe volumes.
15:29 jthorne joined #gluster
15:29 Skunnyk joined #gluster
15:29 yosafbridge joined #gluster
15:30 salamij92 yes i need stripe, i am going to use this volume to put torrents on it, and the machines are over the internet and not local, but with a great internet connection
15:30 semiosis i dont think you need stripe :)
15:31 semiosis that's not a convincing reason
15:31 salamij92 and without stripe the problem is the same
15:31 semiosis now that's interesting
15:31 semiosis are any of your bricks ext4?
15:32 salamij92 w7
15:32 salamij92 w8
15:32 semiosis ?
15:33 dbruhn__ Salami, what is the purpose of the stripe? Wouldn't you rather have replication to have the copies locally? Striping is really only good if you have a single file that will grow beyond a single brick's storage capacity
15:33 semiosis inability to list directory contents is a symptom of the ,,(ext4) bug, perhaps you have that
15:33 glusterbot (#1) Read about the ext4 problem at http://goo.gl/xPEYQ or (#2) Track the ext4 bugzilla report at http://goo.gl/CO1VZ
15:33 Norky argh, gluster has restarted "/usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid"
15:34 * Norky kills it again
15:35 * Norky has become the meatspace oom-killer for this misbehaving machine
15:35 devoid joined #gluster
15:37 ofu__ joined #gluster
15:39 failshel_ joined #gluster
15:39 silajim joined #gluster
15:40 jim`` joined #gluster
15:40 silajim i am back :P salamij92 here
15:40 hchiramm_ joined #gluster
15:40 jack_ joined #gluster
15:40 salamij92 i do stripe because in "normal" mode the upload is between 10-15 MB/s and in stripe 4 it is 20-30 MB/s
15:41 harish joined #gluster
15:41 waldner joined #gluster
15:41 waldner joined #gluster
15:41 yosafbridge joined #gluster
15:41 silajim i've been out for 3-4
15:41 silajim minutes
15:42 rob__ joined #gluster
15:42 RobertLaptop joined #gluster
15:42 Kins joined #gluster
15:43 hjmangalam joined #gluster
15:43 mohankumar__ joined #gluster
15:44 ofu joined #gluster
15:44 lanning_ joined #gluster
15:44 social__ joined #gluster
15:44 helloadam joined #gluster
15:44 stigchristian joined #gluster
15:45 xymox joined #gluster
15:49 silajim joined #gluster
15:50 mohankumar__ joined #gluster
15:52 ofu__ joined #gluster
15:52 neofob1 joined #gluster
15:52 silajim joined #gluster
15:52 thekev` joined #gluster
15:53 silajim dbruhn__: ??
15:53 portante_ joined #gluster
15:53 aravindavk joined #gluster
15:53 kkeithley joined #gluster
15:53 lanning joined #gluster
15:53 wN joined #gluster
15:54 portante_ joined #gluster
15:54 kkeithley joined #gluster
15:54 wN joined #gluster
15:54 wN joined #gluster
15:55 nueces joined #gluster
15:56 StarBeast joined #gluster
15:56 helloadam joined #gluster
15:56 mohankumar__ joined #gluster
15:56 masterzen_ joined #gluster
15:57 xymox joined #gluster
15:57 atrius joined #gluster
15:57 arusso joined #gluster
15:58 salamij92 what filesystem is the most appropriate for gluster?
15:58 _pol joined #gluster
15:59 al joined #gluster
16:00 mriv joined #gluster
16:01 hchiramm_ joined #gluster
16:04 duerF joined #gluster
16:09 Norky XFS is the normal recommendation
16:10 Norky people have used ext4, zfs and I believe others with success, but the easy answer is xfs
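A sketch of preparing a brick on XFS along those lines (device and mount point are hypothetical; the 512-byte inode size is the commonly recommended value so gluster's extended attributes fit in the inode):

    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /export/brick1
    mount /dev/sdb1 /export/brick1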
16:11 rosco joined #gluster
16:11 salamij92 the bricks are ext4, i think i will try to migrate them
16:12 lh joined #gluster
16:12 dxd828 joined #gluster
16:12 Norky if your kernel is affected by that struct change, then yes, that's a good idea
16:12 rob__ joined #gluster
16:13 salamij92 not all the bricks have the same kernel, does this affect gluster
16:13 salamij92 ?
16:13 DataBeaver joined #gluster
16:13 devoid joined #gluster
16:13 zwu joined #gluster
16:13 Technicool joined #gluster
16:14 partner joined #gluster
16:16 vpshastry joined #gluster
16:24 Norky well if some have the ext2/3/4 structure change and some don't, you'll see different behaviour
16:25 jclift_ joined #gluster
16:25 Norky I would suggest having all gluster machines in a cluster as homogeneous as possible
16:25 dbruhn__ EXT4 is not recommended last time I checked. XFS is the recommended underlying file system
16:27 salamij92 i will upgrade the kernel on all 32 machines, the only possibility to have xfs, for me, is migrating the filesystem, otherwise i can not
16:27 Norky what do you mean?
16:28 Norky actually, not to worry, I must go
16:28 Norky good luck :)
16:30 salamij92 i mean that i don't have the power to change the fs during the os install
16:36 manik joined #gluster
16:41 majeff joined #gluster
16:42 Mo_ joined #gluster
16:42 majeff1 joined #gluster
16:44 bennyturns joined #gluster
16:44 manik joined #gluster
16:50 _pol joined #gluster
16:51 _pol joined #gluster
16:55 mtanner_ joined #gluster
16:56 sjoeboo_ joined #gluster
16:57 H___ joined #gluster
16:57 war|chil1 joined #gluster
17:00 kspaans_ joined #gluster
17:00 ingard joined #gluster
17:00 GLHMarmo1 joined #gluster
17:00 MinhP joined #gluster
17:04 zykure_ joined #gluster
17:07 H__ joined #gluster
17:08 saurabh joined #gluster
17:08 _pol joined #gluster
17:08 bennyturns joined #gluster
17:08 dxd828 joined #gluster
17:08 mriv joined #gluster
17:08 xymox joined #gluster
17:08 helloadam joined #gluster
17:08 wN joined #gluster
17:08 lanning joined #gluster
17:08 kkeithley joined #gluster
17:08 failshel_ joined #gluster
17:08 rwheeler joined #gluster
17:08 hagarth1 joined #gluster
17:08 Guest65992 joined #gluster
17:08 SteveCooling joined #gluster
17:08 snarkyboojum_ joined #gluster
17:08 ccha3 joined #gluster
17:08 foster joined #gluster
17:08 JoeJulian joined #gluster
17:08 Rhomber joined #gluster
17:08 paratai joined #gluster
17:08 frakt joined #gluster
17:08 jcastle joined #gluster
17:08 stopbit joined #gluster
17:08 kbsingh joined #gluster
17:08 shanks joined #gluster
17:08 x4rlos joined #gluster
17:08 Gugge joined #gluster
17:08 purpleidea joined #gluster
17:08 semiosis joined #gluster
17:08 wgao joined #gluster
17:08 dblack joined #gluster
17:08 roo9 joined #gluster
17:08 jds2001 joined #gluster
17:08 hagarth__ joined #gluster
17:12 bulde joined #gluster
17:12 georgeh|workstat joined #gluster
17:14 jiffe98 joined #gluster
17:15 Chr1z joined #gluster
17:18 Chr1z Ok.. I have created a simple 100gb volume between 2 servers using something like this: gluster volume create myvol replica 2 transport tcp server1:/data server2:/data -- Now.. if I want to add a 3rd server for load balancing purposes (don't need the extra space) -- How is that done?  Also, what is it that I need to enable to avoid split-brain and how is that done?
17:24 failshel_ Chr1z: you can't. since your volume is configured with 2 replicas, you will need to add a pair
17:24 failshell its quite simple
17:25 failshell 1. gluster peer probe newhost 2. gluster volume add-brick volume newhost:/some/path 3. gluster volume rebalance volume start
17:28 NuxRo joined #gluster
17:28 semiosis failshell: WAT?
17:30 semiosis Chr1z: sounds like you want to increase to replica 3?  that could help with load balancing of reads, but it will cost in write performance
17:33 failshell semiosis: my volumes are configured with 2 replicas, and i can't add a single machine. has to be done in pairs
17:34 semiosis yeah but your "quite simple" instructions are way off
17:34 failshell please enlighten me
17:34 failshell im by no means an expert
17:34 aravindavk joined #gluster
17:34 failshell that's what i do when i grow a volume
17:35 semiosis oh i see, then not *way* off, just missing the second brick
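Spelled out for a replica-2 volume, the sequence looks roughly like this sketch, with bricks added as a pair (hostnames, volume name, and paths are hypothetical):

    gluster peer probe newhost1
    gluster peer probe newhost2
    # one brick from each new host, so together they form a new replica pair
    gluster volume add-brick myvol newhost1:/export/brick1 newhost2:/export/brick1
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status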
17:35 nightwalk joined #gluster
17:36 failshell well, yeah that was implied
17:36 failshell i never give cut'n'paste instructions
17:36 failshell :)
17:36 semiosis hahaha :)
17:36 failshell people have to figure out stuff on their own
17:36 semiosis YMMV instructions
17:36 failshell this cluster of mine keeps growing
17:36 failshell i just keep adding bricks
17:37 failshell im at 16 VMs now. i wonder if i just start adding more bricks per VMs
17:39 Guest7684 hm.  anyone have any thoughts on the idea of a "supermount" option for gluster, i.e. mount whether or not the filesystem is available, and if it's not, behave exactly as if the server *became* unavailable?
17:39 phox gg freenode
17:41 majeff joined #gluster
17:41 awheeler_ joined #gluster
17:47 Chr1z so I need to add them in pairs or no?
17:47 failshell isnt there a web interface for gluster? i think i saw that once
17:47 failshell Chr1z: yes
17:47 failshell but if you follow my instructions, they need to be done for each added node
17:48 Chr1z how do I add say 2 more w/o hurting write performance?
17:55 Chr1z from a client… if I mount server1:/data and server1 goes down.. does that mount die also or reconnect to server2 after some sort of timeout setting?
18:00 Chr1z ok so it does fail over… cool...
18:00 dbruhn__ failshell: http://www.ovirt.org/images/8/84/OVirt-Gluster.pdf
18:00 glusterbot <http://goo.gl/b66Qk> (at www.ovirt.org)
18:02 dbruhn__ Gluster had its own UI many versions ago before the red hat buyout, but it was built around the old method of managing the systems instead of the gluster commands.
18:07 semiosis ~mount server | Chr1z
18:07 glusterbot Chr1z: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
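As a sketch of what that means in practice (names are hypothetical, and the backupvolfile-server mount option is an assumption that may not exist in older releases):

    # server1 is only contacted to fetch the volume definition at mount time;
    # afterwards the client talks to every brick server in the volume directly
    mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol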
18:07 semiosis failshell: ,,(gmc)
18:07 glusterbot failshell: The Gluster Management Console (GMC) has been discontinued. If you need a pretty gui to manage storage, support for GlusterFS is in oVirt.
18:11 dbruhn__ semiosis: that bot link is dead
18:12 semiosis @split-brain
18:12 glusterbot semiosis: (#1) To heal split-brain in 3.3, see http://goo.gl/FPFUX ., or (#2) learn how to cause split-brain here: http://goo.gl/Oi3AA
18:12 semiosis glusterbot: meh
18:12 glusterbot semiosis: I'm not happy about it either
18:12 semiosis i thought the helpshift pages just moved, but now it seems even those new links are dead
18:12 semiosis what a bummer
18:13 semiosis johnmark: know anything about a static copy of the helpshift pages?
18:13 semiosis bbiab
18:13 Chr1z What can be done to prevent split brain?  wasn't there something that could be enabled?  for example if I mount server1:/data and then reboot server1… but create a text file (which is stored on server2)… when server1 comes back up that file is not there even after the client accesses the file on the mount
18:13 rwheeler joined #gluster
18:13 semiosis Chr1z: that doesn't sound like split brain
18:14 semiosis bbiab
18:14 semiosis afk
18:16 dustint joined #gluster
18:19 kkeithley johnmark: any idea about the dead links on community.gluster.org? Is it sick?
18:22 Chr1z how is quorum enabled?
18:22 RobertLaptop joined #gluster
18:27 Chr1z left #gluster
18:28 rob__ joined #gluster
18:35 partner joined #gluster
18:37 y4m4 joined #gluster
18:46 partner joined #gluster
18:49 tziOm joined #gluster
18:52 vimal joined #gluster
19:08 zaitcev joined #gluster
19:13 partner joined #gluster
19:15 balunasj joined #gluster
19:15 balunasj joined #gluster
19:17 ehg joined #gluster
19:22 tg2 quick question, if I have 2 storage nodes, storage1 and storage2, each with 2 bricks
19:23 tg2 if I remove the bricks from storage2 will clients just use storage1?
19:23 tg2 can I take storage2 offline once both bricks have been removed?
19:23 yosafbridge` joined #gluster
19:24 nueces joined #gluster
19:24 waldner_ joined #gluster
19:24 waldner_ joined #gluster
19:24 atrius_ joined #gluster
19:30 masterzen joined #gluster
19:30 tg2 seems like I did a remove-brick start on one
19:30 tg2 and the clients are still putting files on it
19:30 tg2 once the remove-brick was completed, it had 50G of new files on it
19:31 tg2 I thought if a brick was in "removal" status, that clients wouldn't put new files on it
19:31 tg2 while it was offloading its cache
19:33 awheeler_ joined #gluster
19:39 Chocobo left #gluster
19:44 rob__ joined #gluster
19:44 JoeJulian tg2: I /think/ that behind the scenes the code to migrate data off the brick uses replicate. If I'm right, those new files should also exist on the not-removed bricks.
19:45 tg2 yeah it replicates then removes
19:46 tg2 what I don't get is how a brick can continue to receive new files while it's doing a remove-brick operation
19:46 tg2 to me, there should be a flag on that brick that it's currently not accepting new files
19:46 tg2 the solution seems to be to remove the bricks from the clients' configs
19:46 tg2 but then they can't read the files that are on that brick, until it's been offloaded.
19:47 tg2 and around you go.
19:50 tg2 so what's the procedure for removing a brick without interrupting file operations on the clients
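For reference, a sketch of the documented decommissioning flow in GlusterFS 3.3 (volume and brick names hypothetical); it does not by itself address the behaviour described above, where new writes can still land on the brick until the commit:

    gluster volume remove-brick myvol server2:/export/brick1 start
    # poll until the migration reports completed with no failures
    gluster volume remove-brick myvol server2:/export/brick1 status
    # only then drop the brick from the volume definition
    gluster volume remove-brick myvol server2:/export/brick1 commit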
19:53 _pol_ joined #gluster
19:55 al joined #gluster
19:56 _pol joined #gluster
20:07 tziOm joined #gluster
20:10 tziOm joined #gluster
20:12 Mo__ joined #gluster
20:15 andreask joined #gluster
20:20 ingard hi guys
20:20 ingard [2013-06-05 22:13:48] W [fuse-bridge.c:493:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/887) inode (ptr=0x3e1c090, ino=3092376650278, gen=5881526961011951847) found conflict (ptr=0x7f4ce41910e0, ino=3092376650278, gen=5881526961011951847)
20:20 ingard [2013-06-05 22:13:48] W [fuse-bridge.c:493:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/887/bakgei) inode (ptr=0x2aa23b0, ino=1857716703220, gen=0) found conflict (ptr=0x7f4ce4745d30, ino=1857716703220, gen=0)
20:20 ingard [2013-06-05 22:13:52] W [fuse-bridge.c:493:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/887) inode (ptr=0x3d40890, ino=3092376650278, gen=5881526961011951847) found conflict (ptr=0x7f4ce41910e0, ino=3092376650278, gen=5881526961011951847)
20:20 ingard [2013-06-05 22:13:52] W [fuse-bridge.c:493:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/887/bakgei) inode (ptr=0x26fc9c0, ino=1857716703220, gen=0) found conflict (ptr=0x7f4ce4745d30, ino=1857716703220, gen=0)
20:20 ingard im seeing quite a lot of these messages
20:20 ingard can anyone explain what it means and/or how I can fix it?
20:21 ingard i'm getting input/output error or it just hangs if i try to ls those dirs
20:25 y4m4 joined #gluster
20:51 nueces joined #gluster
20:54 tc00per1 joined #gluster
20:54 tc00per1 left #gluster
20:56 JoeJulian ingard: Those are just warnings, so they /shouldn't/ be causing any problems. I don't know what it means by "found conflict" though. Any errors at the brick end?
21:04 ingard JoeJulian: not from those paths
21:09 JoeJulian Perhaps it's not those paths that it's hanging on.
21:10 ingard no it could be the whole mount point i guess
21:10 ingard BUT
21:10 ingard i could ls the path/to/mount/operations
21:10 ingard which I use for ops stuff on each of the mounts
21:10 ingard the path that had the warning got me input/output error
21:10 ingard :s
21:11 social__ hi, how would one resolve Stale NFS filehandle? what are all the possible causes for that on gluster?
21:11 a2_ do you just see it in the logs, or as an error to the app?
21:11 ingard JoeJulian: i should mention that this is 3.0.5
21:12 ingard but just to clarify, those are just warnings and the paths should be accessible?
21:13 JoeJulian That should be an accurate statement, yes.
21:16 ingard JoeJulian: any idea how I could track down the problem thats giving me input/output error?
21:17 JoeJulian Since you're on 3.0 there are a lot of possibilities that I wasn't expecting. Check those files/directories extended attributes and look for inconsistencies.
21:18 JoeJulian ~extended attributes | ingard
21:18 glusterbot ingard: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
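As a sketch, the same relative path would be checked on each brick and the output compared, looking for gluster's trusted.* attributes differing between the copies (the brick path is hypothetical; /887/bakgei is one of the directories from the warnings above):

    # run on every brick server that holds a copy of the directory
    getfattr -m . -d -e hex /export/brick1/887/bakgei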
21:19 JoeJulian There are so many bugs in 3.0 that have been fixed since then, I hate to even try to start figuring out which one it might have been.
21:20 ingard right
21:21 ingard i was kinda expecting to hear that
21:21 ingard but its not an easy task for me to upgrade all clients/servers
21:21 ingard atm anyway
21:21 ingard would upgrade to the latest 3.0.* make any difference?
21:21 ingard and is it even possible to run a mixed 3.0.5 and 3.0.latest environment?
21:22 JoeJulian Maybe. IIRC the mismatched gfid bug was fixed around that time. Upgrade servers first and you can upgrade clients piecemeal.
21:22 JoeJulian Staying within 3.0 that is...
21:23 ingard the thing is
21:23 ingard each box running the clients will have 20 different gluster mountpoints
21:23 ingard so upgrading, if I have to do server AND client, is going to be a massive job
21:24 cfeller joined #gluster
21:29 portante joined #gluster
21:31 ingard JoeJulian: umount && mount solved the input/output errors
21:36 vimal joined #gluster
21:49 __Bryan__ joined #gluster
22:09 devoid1 joined #gluster
22:10 mooperd joined #gluster
22:28 rwheeler joined #gluster
22:28 _pol joined #gluster
22:31 manik joined #gluster
22:39 glusterbot New news from newglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
22:42 _pol joined #gluster
22:43 StarBeast joined #gluster
22:47 tg2 joined #gluster
22:50 masterzen joined #gluster
22:58 rcoup joined #gluster
23:00 rcoup morning folks. I've applied https://bugzilla.redhat.com/show_bug.cgi?id=874554 to 3.3.1 and while it seems to work for normal writes, rebalancing (via remove-brick at least) seems to attempt to add more data to full nodes.
23:00 glusterbot <http://goo.gl/xbQQC> (at bugzilla.redhat.com)
23:00 glusterbot Bug 874554: unspecified, medium, ---, rtalur, ON_QA , cluster.min-free-disk not having an effect on new files
23:00 rcoup well past the min-free-disk limit
23:01 rcoup that expected?
23:06 rcoup follow-up question. I presume failures (all I can see are from the above) during the remove-brick rebalance mean I can't call remove-brick commit, but should re-start the remove-brick rebalance (how?)
23:09 glusterbot New news from newglusterbugs: [Bug 844584] logging: Stale NFS messages <http://goo.gl/z72b6>
23:09 _pol joined #gluster
23:20 dbruhn joined #gluster
23:26 tg2 i too have some issues with remove-brick
23:26 tg2 ie: it successfully offloads its files to the remaining bricks, but it still accepts new files from the volume
23:26 tg2 so after an 8TB remove-brick, it has like 50GB of files on it since that is roughly the growth rate
23:31 rob__ joined #gluster
23:31 rcoup tg2: did you get failures during the remove-brick process?
23:31 rcoup tg2: and how did you resolve the issue? run remove-brick start again?
23:32 rcoup my impression is that remove-brick flags the brick(s) as not-accepting-writes, then runs a rebalance to move the existing files elsewhere
23:32 rcoup but maybe i'm misunderstanding How It All Works
23:32 tg2 I thought that too
23:32 jbrooks joined #gluster
23:32 tg2 no no failures
23:33 tg2 maybe it's accepting the files after the remove-brick finishes
23:33 tg2 maybe the solution is to commit right away
23:33 tg2 i guess
23:36 tg2 I thought it was accepting files while it was doing its remove-brick
23:36 tg2 I'm also curious about what happens if I do remove-brick from 2 bricks simultaneously
23:37 tg2 will gluster know well enough not to send to the other brick?
23:37 tg2 well the answer is: Rebalance is in progress. Please retry after completion
23:37 tg2 so you can only remove 1 brick at a time
23:37 tg2 which consequently moves files onto its sister bricks
23:38 tg2 grr
23:43 tg2 can you do multiple remove-brick actions in the same command?
