
IRC log for #gluster, 2013-03-25


All times shown according to UTC.

Time Nick Message
00:01 hjmangalam1 joined #gluster
00:02 xavih_ joined #gluster
00:02 xymox_ joined #gluster
00:02 verdurin joined #gluster
00:03 jag3773 joined #gluster
00:03 awheeler joined #gluster
00:03 fleducquede joined #gluster
00:03 mynameisbruce__ joined #gluster
00:03 Cenbe_ joined #gluster
00:03 georgeh|workstat joined #gluster
00:03 shawns|work joined #gluster
00:03 klaxa joined #gluster
00:03 stoile joined #gluster
00:03 neofob joined #gluster
00:03 _Bryan_ joined #gluster
00:03 RobertLaptop joined #gluster
00:03 primusinterpares joined #gluster
00:03 awheeler_ joined #gluster
00:04 eightyeight joined #gluster
00:08 stoile joined #gluster
00:09 hjmangalam1 left #gluster
00:13 RobertLaptop joined #gluster
00:13 manik joined #gluster
00:14 mriv_ joined #gluster
00:14 _Bryan_ joined #gluster
00:15 georgeh|workstat joined #gluster
00:16 sac_ joined #gluster
00:16 atrius joined #gluster
00:17 yinyin joined #gluster
00:17 primusinterpares joined #gluster
00:18 klaxa joined #gluster
00:20 shawns|work joined #gluster
00:20 kshlm|AF1 joined #gluster
00:22 hjmangalam joined #gluster
00:23 mynameisbruce__ joined #gluster
00:26 shapemaker joined #gluster
00:28 fleducquede joined #gluster
00:28 lkoranda joined #gluster
00:29 kbsingh joined #gluster
00:30 NuxRo joined #gluster
00:31 furkaboo joined #gluster
00:33 arusso joined #gluster
00:34 arusso joined #gluster
00:34 Gugge_ joined #gluster
00:36 klaxa joined #gluster
00:36 layer3switch joined #gluster
00:37 hjmangalam joined #gluster
00:37 NuxRo joined #gluster
00:38 Cenbe joined #gluster
00:39 hjmangalam1 joined #gluster
00:40 cyberbootje joined #gluster
00:41 Norky joined #gluster
00:41 JoeJulian joined #gluster
00:41 arusso joined #gluster
00:42 _br_- joined #gluster
00:42 tryggvil__ joined #gluster
00:42 52AAAPGAV joined #gluster
00:42 gmcwhistler joined #gluster
00:42 neofob joined #gluster
00:43 _br_ joined #gluster
00:45 awheeler_ joined #gluster
00:45 awheeler joined #gluster
00:45 mynameisbruce__ joined #gluster
00:45 georgeh|workstat joined #gluster
00:45 DigiDaz joined #gluster
00:46 Nuxr0 joined #gluster
00:48 nixpanic_ joined #gluster
00:49 nixpanic_ joined #gluster
00:50 shawns|work joined #gluster
00:50 Norky joined #gluster
00:50 arusso joined #gluster
00:50 arusso joined #gluster
00:51 primusinterpares joined #gluster
00:52 jdarcy joined #gluster
01:00 yinyin joined #gluster
01:05 jules_ joined #gluster
01:07 zwu joined #gluster
01:07 Norky_ joined #gluster
01:07 arusso joined #gluster
01:07 shawns|work joined #gluster
01:07 Skunnyk joined #gluster
01:10 jiffe98 joined #gluster
01:12 Guest77353 joined #gluster
01:12 DWSR joined #gluster
01:19 Skunnyk joined #gluster
01:19 robos joined #gluster
01:26 xavih joined #gluster
01:29 kevein joined #gluster
01:31 Skunnyk joined #gluster
01:38 verdurin joined #gluster
01:38 Guest77353 joined #gluster
01:38 DWSR joined #gluster
01:38 xymox joined #gluster
01:41 Skunnyk joined #gluster
01:45 Guest77353 joined #gluster
01:45 DWSR joined #gluster
01:47 primusinterpares joined #gluster
01:47 robos joined #gluster
01:47 tryggvil__ joined #gluster
01:47 gmcwhistler joined #gluster
01:49 DigiDaz_ joined #gluster
01:50 gmcwhistler joined #gluster
01:50 Skunnyk joined #gluster
01:50 robos joined #gluster
01:51 jiffe99 joined #gluster
01:52 dendazen hi
01:52 glusterbot dendazen: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
01:53 dendazen i did apt-get upgrade glusterfs and it seems it upgraded to 3.3
01:53 dendazen but 'dpkg -s' still shows 2.5 version
01:53 dendazen of glusterfs-server package
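
On Debian/Ubuntu the mismatch dendazen describes is usually the package being held back or pulled from a different repository. A minimal check, assuming the stock package names and the Ubuntu glusterfs-server init script:

    apt-cache policy glusterfs-server glusterfs-client   # installed vs. candidate version per repository
    dpkg -l | grep -i glusterfs                          # what dpkg actually has installed
    sudo apt-get install glusterfs-server                # pulls the candidate if it was held back
    sudo service glusterfs-server restart                # restart so the new binaries are used
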
02:01 robos joined #gluster
02:03 gmcwhistler joined #gluster
02:05 zwu joined #gluster
02:09 verdurin_ joined #gluster
02:09 arusso joined #gluster
02:09 robos joined #gluster
02:10 arusso joined #gluster
02:10 Skunnyk joined #gluster
02:11 hjmangalam1 joined #gluster
02:12 hjmangalam left #gluster
02:13 jules_ joined #gluster
02:13 Guest77353 joined #gluster
02:13 DWSR joined #gluster
02:20 DigiDaz joined #gluster
02:22 hjmangalam1 joined #gluster
02:24 Skunnyk joined #gluster
02:25 arusso joined #gluster
02:29 DigiDaz_ joined #gluster
02:33 aravindavk joined #gluster
02:34 Norky_ joined #gluster
02:34 Skunnyk joined #gluster
02:34 arusso joined #gluster
02:34 tryggvil__ joined #gluster
02:34 arusso joined #gluster
02:35 Guest77353 joined #gluster
02:35 DWSR joined #gluster
02:45 stoile joined #gluster
02:45 redsolar joined #gluster
02:46 aravindavk joined #gluster
02:46 soukihei_ joined #gluster
02:47 inodb_ joined #gluster
02:47 a2 joined #gluster
02:48 Humble joined #gluster
02:48 kevein joined #gluster
02:48 Norky_ joined #gluster
02:49 johndesc1 joined #gluster
02:57 joaquim__ joined #gluster
03:04 soukihei_ joined #gluster
03:05 hchiramm_ joined #gluster
03:13 aravindavk joined #gluster
03:23 bala joined #gluster
03:44 jclift joined #gluster
03:44 bharata joined #gluster
03:54 Humble joined #gluster
03:56 Skunnyk joined #gluster
04:08 shylesh joined #gluster
04:19 sgowda joined #gluster
04:21 aravindavk joined #gluster
04:21 vpshastry joined #gluster
04:22 saurabh joined #gluster
04:23 sripathi joined #gluster
04:26 robo joined #gluster
04:27 vshankar joined #gluster
04:28 Humble joined #gluster
04:34 yinyin joined #gluster
04:36 verdurin_ joined #gluster
04:48 lalatenduM joined #gluster
04:54 yinyin joined #gluster
05:28 Humble joined #gluster
05:28 yinyin joined #gluster
05:33 rastar joined #gluster
05:34 mohankumar joined #gluster
05:38 dendazen_ joined #gluster
05:40 maxiepax_ joined #gluster
05:41 kbsingh_ joined #gluster
05:41 sac__ joined #gluster
05:42 torbjorn1_ joined #gluster
05:42 Rydekull_ joined #gluster
05:43 alex88_ joined #gluster
05:44 mriv joined #gluster
05:45 tjikkun__ joined #gluster
05:46 glusterbot New news from newglusterbugs: [Bug 927109] posix complaince test fails <http://goo.gl/QDseS> || [Bug 927111] posix complaince test fails <http://goo.gl/Uy4OC> || [Bug 927112] posix compliance test fails <http://goo.gl/MBmCB>
05:46 helloadam joined #gluster
05:46 bulde joined #gluster
05:46 SteveCooling joined #gluster
05:46 jiqiren joined #gluster
05:47 raghu joined #gluster
05:48 ramkrsna joined #gluster
05:48 ramkrsna joined #gluster
05:55 H__ joined #gluster
05:59 timothy joined #gluster
06:06 junaid joined #gluster
06:12 bala joined #gluster
06:15 isomorphic joined #gluster
06:22 glusterbot New news from resolvedglusterbugs: [Bug 761733] test bug <http://goo.gl/Hl8CK>
06:24 deepakcs joined #gluster
06:25 vimal joined #gluster
06:41 rgustafs joined #gluster
06:44 satheesh joined #gluster
06:46 glusterbot New news from newglusterbugs: [Bug 924504] print the in-memory graph rather than volfile in the logfile <http://goo.gl/6rd7E>
06:52 satheesh joined #gluster
06:58 robos joined #gluster
06:58 joeto joined #gluster
07:04 satheesh1 joined #gluster
07:10 Nevan joined #gluster
07:13 hateya joined #gluster
07:19 guigui3 joined #gluster
07:22 glusterbot New news from resolvedglusterbugs: [Bug 923228] 3.4 Alpha 2 Breaks swift file posting <http://goo.gl/nec4f> || [Bug 923580] ufo: `swift-init all start` fails <http://goo.gl/F73bO>
07:26 redbeard joined #gluster
07:26 timothy joined #gluster
07:33 ctria joined #gluster
07:36 guigui3 joined #gluster
07:39 ngoswami joined #gluster
07:39 yinyin joined #gluster
07:41 yinyin joined #gluster
07:47 xavih joined #gluster
07:52 glusterbot New news from resolvedglusterbugs: [Bug 764890] Keep code more readable and clean <http://goo.gl/p7bDp>
07:57 andreask joined #gluster
08:03 tjikkun_work joined #gluster
08:03 cw joined #gluster
08:07 camel1cz joined #gluster
08:16 glusterbot New news from newglusterbugs: [Bug 927146] AFR changelog vs data ordering is not durable <http://goo.gl/jfrtO>
08:23 ctria joined #gluster
08:27 sripathi joined #gluster
08:29 camel1cz joined #gluster
08:34 shylesh_ joined #gluster
08:34 bulde1 joined #gluster
08:35 alex88_ joined #gluster
08:35 vikumar joined #gluster
08:37 hchiramm_ joined #gluster
08:39 xavih_ joined #gluster
08:39 Skunnyk joined #gluster
08:40 lala_ joined #gluster
08:40 fleducquede joined #gluster
08:40 robo joined #gluster
08:42 zwu joined #gluster
08:42 timothy joined #gluster
08:42 guigui1 joined #gluster
08:42 saurabh joined #gluster
08:43 shylesh_ joined #gluster
08:43 mriv joined #gluster
08:44 raghu` joined #gluster
08:44 ctria joined #gluster
08:44 Skunnyk joined #gluster
08:44 raghu` joined #gluster
08:44 kevein joined #gluster
08:44 robo joined #gluster
08:44 helloadam joined #gluster
08:44 stoile joined #gluster
08:44 tjikkun_work joined #gluster
08:44 Guest77353 joined #gluster
08:44 DWSR joined #gluster
08:44 sac__ joined #gluster
08:45 torbjorn__ joined #gluster
08:46 redbeard joined #gluster
08:46 Rydekull joined #gluster
08:46 bulde joined #gluster
08:46 ngoswami joined #gluster
08:53 bharata joined #gluster
08:54 samppah does geo-replication always copy the full file to the slave or only parts that have changed?
08:57 ctria joined #gluster
08:57 Skunnyk joined #gluster
08:59 samppah i have noticed that geo-replication seems to create some kind of temporary file to the slave and i assume that it will get renamed to original filename? is that correct and is there any gotchas in process?
09:00 samppah i'd like to take snapshots of geo-replication target on slave machine and i'm bit worried that is it possible there will be uncompleted files
09:02 * camel1cz is interested in recommended backup procedure too. Mainly about the data integrity
09:02 ProT-0-TypE joined #gluster
09:02 samppah yeah, other opinions about taking backups are also welcome :)
09:03 rastar1 joined #gluster
09:03 sripathi joined #gluster
09:04 vpshastry joined #gluster
09:09 manik joined #gluster
09:15 kevein joined #gluster
09:16 sgowda joined #gluster
09:18 jules_ joined #gluster
09:22 shireesh joined #gluster
09:27 ujjain joined #gluster
09:36 bulde joined #gluster
09:38 ngoswami joined #gluster
09:39 sgowda joined #gluster
09:39 satheesh joined #gluster
09:41 ngoswami_ joined #gluster
09:42 sripathi joined #gluster
09:48 ricky-ticky joined #gluster
09:48 ricky-ticky joined #gluster
09:49 dobber_ joined #gluster
09:53 al joined #gluster
09:59 camel1cz left #gluster
10:01 sripathi joined #gluster
10:07 ngoswami joined #gluster
10:08 nixpanic joined #gluster
10:08 nixpanic joined #gluster
10:12 isomorphic joined #gluster
10:14 vpshastry joined #gluster
10:14 Norky joined #gluster
10:14 yinyin joined #gluster
10:20 inodb Hi i would like to know if on a gluster distributed replicated setup
10:21 inodb there is a possibility to include fs level quotas
10:21 inodb NOT gluster quotas
10:22 inodb i mean what happen if file can't be written because
10:22 inodb of low level quotas ?
10:26 mohankumar joined #gluster
10:28 NuxRo inodb: afaik glusterfs does not support unix quotas
10:28 NuxRo the only quota it knows about is the internal one
10:30 bharata joined #gluster
10:30 sgowda joined #gluster
10:32 inodb NuxRo:thx. Imho Gluster will fail to write file if quota limit is reached
10:32 inodb but i don't totally know the implications
10:32 inodb of this in a distributed/replicated setup :/
10:33 inodb i think i will have to test ;-)
10:34 rastar joined #gluster
10:35 NuxRo please do test and let us know :)
10:35 vpshastry joined #gluster
10:36 NuxRo with unix quota you can open a file but not write to it, i expect more or less the same behaviour from glusterfs
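
For reference, the "internal" quotas NuxRo mentions are driven entirely from the gluster CLI; a minimal sketch, with VOLNAME and the directory path as placeholders:

    gluster volume quota VOLNAME enable
    gluster volume quota VOLNAME limit-usage /projects/alpha 10GB
    gluster volume quota VOLNAME list
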
10:36 Han joined #gluster
10:37 jiffe99 joined #gluster
11:00 variant joined #gluster
11:04 jdarcy joined #gluster
11:12 manik joined #gluster
11:14 venkatesh joined #gluster
11:16 bulde joined #gluster
11:22 Han What kind of "performance loss" can I expect when I use gluster compared to a single hd? Assuming they're both equal and connected over gigabit.
11:27 shireesh joined #gluster
11:35 bulde joined #gluster
11:36 kkeithley About the same as what you'd see if you used NFS.
11:38 kkeithley and it depends too on your workload. Gluster isn't as good as NFS is with small files.
11:39 sripathi joined #gluster
11:42 NuxRo actually kkeithley, if he does replication his speed will also be halved..
11:43 andreask joined #gluster
11:45 kkeithley With one drive/brick he won't be doing replication, if you want an apples-to-apples comparison. But yes, you're correct.
11:49 NuxRo true, Han should clarify what he wants to run since it can vary a lot
11:57 mohankumar joined #gluster
12:01 go2k joined #gluster
12:02 go2k Hello everyone
12:04 jdarcy joined #gluster
12:04 dustint joined #gluster
12:05 go2k Is there any command in glusterfs which can tell me whether all bricks are correctly and fully synchronized in glusterfs?
12:06 go2k gluster volume info shows only this: (Status: Started)
12:06 go2k but it does not tell us whether everything is alright
12:07 guigui3 joined #gluster
12:11 balunasj joined #gluster
12:12 junaid joined #gluster
12:13 bulde joined #gluster
12:16 jdarcy joined #gluster
12:24 mohankumar joined #gluster
12:25 Han It's just the replication that's interesting. Downloading shouldn't matter that much.
12:25 Staples84 joined #gluster
12:28 camel1cz joined #gluster
12:28 camel1cz left #gluster
12:30 msvbhat go2k: gluster volume status
12:33 rcheleguini joined #gluster
12:33 go2k msvbhat: it just tells you that it has started
12:36 msvbhat go2k: gluster volume status tells you the staus of bricks in the volume.
12:38 msvbhat go2k: And the status of gluster NFS  servers and self-heal daemon process.
12:39 msvbhat go2k: BTW gluster volume info and gluster volume status are two different commands.
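
The health checks msvbhat is describing look roughly like this on 3.3 and later (VOLNAME is a placeholder):

    gluster volume status VOLNAME          # online state, port and PID of every brick, NFS server and self-heal daemon
    gluster volume status VOLNAME detail   # adds per-brick disk usage and inode counts
    gluster volume heal VOLNAME info       # entries the self-heal daemon still considers out of sync
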
12:39 go2k hmm... seems I will have to upgrade to 3.3.1 ;)
12:40 go2k yes, I do not have gluster volume status over here
12:42 msvbhat go2k: Oh... Okay
12:46 go2k anyway, I will not do that now as it seems gluster download site is down for a while :P
12:48 kkeithley download.gluster.org looks like it's up to me.
12:52 Han http://www.downforeveryoneorjustme.com/
12:52 glusterbot Title: Down For Everyone Or Just Me -> Check if your website is down or up? (at www.downforeveryoneorjustme.com)
12:52 bennyturns joined #gluster
12:57 NuxRo When a bricks goes down, processes using it freeze for about 1-2 minutes before continuing, is this related to some glusterfs timeout? If it rings any bells, where could I modify these timeouts? Ideally I'd like them shorter
12:59 jskinner_ joined #gluster
12:59 vshankar joined #gluster
13:00 Han what client are you using to connect?
13:01 Han man nfs has a lot of options, also regarding timeouts IIRC>
13:01 NuxRo I use the gluster client
13:01 guigui5 joined #gluster
13:02 NuxRo if i mount the volume natively and then do something like creating 1 million files, this process will freeze while a bricks goes down
13:03 aliguori joined #gluster
13:06 go2k hey man, try this:
13:06 kkeithley should be the "network.ping-timeout" setting on the volume. Default is 42 seconds, which is the default from tcp.
13:06 go2k network.ping-timeout: 10
13:06 go2k (adjust the value to whatever suits you)
13:06 go2k yeah exactly ;)
13:07 NuxRo cheers guys, will try
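
The timeout being discussed is set per volume; a minimal sketch, assuming a volume named VOLNAME:

    gluster volume set VOLNAME network.ping-timeout 10   # seconds before an unresponsive brick is declared gone
    gluster volume info VOLNAME                          # the change shows up under "Options Reconfigured"
    gluster volume set VOLNAME network.ping-timeout 42   # put the default back if short timeouts cause spurious disconnects
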
13:09 semiosis ,,(ping-timeout)
13:09 glusterbot The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
13:10 NuxRo meh
13:11 NuxRo they don't die frequently, but wanted to minimise the freeze time :(
13:11 NuxRo i guess I'll have to deal with the defaults if it's safer
13:11 robo joined #gluster
13:12 semiosis balance network connectivity issues with server failures
13:13 NuxRo yeah
13:16 rwheeler joined #gluster
13:18 robo joined #gluster
13:20 theron joined #gluster
13:23 sjoeboo joined #gluster
13:25 NuxRo kkeithley: can glusterfs-swift be hooked into openstack keystone? if yes, what is the relation of projcts/users and glusterfs volumes? will everything be stored on 1 single volume?
13:27 stickyboy joined #gluster
13:29 kkeithley We're planning to do that soon, but atm I can't say what that relationship is.
13:29 kkeithley No, everything won't be stored on one single volume.
13:30 NuxRo right, so maybe 1 volume per openstack user, that'd be nice. i guess we'll wait and see
13:31 shapemaker joined #gluster
13:34 manik joined #gluster
13:37 stickyboy I'm finding writes extremely slow on the FUSE client.  Just simple cp of files a few gigs in size takes very long (10MB/sec).  NFS is working on the same setup at ~60MB+.
13:38 stickyboy Are other people sticking with FUSE?
13:38 stickyboy And what workloads?
13:39 semiosis stickyboy: do you have a different kind of network between client & servers vs between servers?
13:39 semiosis fuse client writes to all replicas in sync, thus dividing throughput by number of replicas
13:39 semiosis could explain what you see
13:40 stickyboy semiosis: Server and client are on a dedicated 1GbE switch.
13:40 semiosis how many replicas in your volume?
13:40 stickyboy When writing over NFS I find the backend server syncing to the replicate server at ~900mbits.
13:41 stickyboy semiosis: Two replicas
16:42 semiosis i wonder if file metadata ops are slowing your copy down... have you mounted your bricks with noatime,nodiratime?
13:42 stickyboy semiosis: Yeah, backend XFS is `noatime,nodiratime,inode64`
13:44 semiosis if you run multiple cp's over fuse at the same time can you get aggregate performance near to NFS speed?
13:44 stickyboy semiosis: Hmm, lemme check.
13:46 jskinner_ joined #gluster
13:52 dendazen joined #gluster
13:55 jruggiero joined #gluster
13:57 stickyboy semiosis: I started 5 copies and I see the throughput going up to ~700mbit or so, but in spurts.
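
A rough way to reproduce the single-stream vs. aggregate comparison semiosis suggests, from a FUSE mount (the mount point /mnt/gv0 and the sizes are placeholders):

    # one writer, flushed at the end so the figure reflects real write throughput
    dd if=/dev/zero of=/mnt/gv0/t0 bs=1M count=2048 conv=fdatasync
    # several writers in parallel; compare the aggregate against the NFS number
    for i in 1 2 3 4 5; do
      dd if=/dev/zero of=/mnt/gv0/t$i bs=1M count=1024 conv=fdatasync &
    done
    wait
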
14:00 robos joined #gluster
14:05 dendazen joined #gluster
14:08 portante joined #gluster
14:14 bugs_ joined #gluster
14:16 wushudoin joined #gluster
14:26 Staples84 joined #gluster
14:32 puebele joined #gluster
14:34 ricky-ticky joined #gluster
14:41 manik joined #gluster
14:45 hjmangalam joined #gluster
14:45 hjmangalam1 joined #gluster
14:47 dendazen joined #gluster
14:47 glusterbot New news from newglusterbugs: [Bug 922801] Gluster not resolving hosts with IPv6 only lookups <http://goo.gl/79JLl>
14:48 wushudoin left #gluster
14:51 plarsen joined #gluster
14:53 daMaestro joined #gluster
14:53 dendazen joined #gluster
14:56 bugs_ joined #gluster
14:57 dendazen joined #gluster
15:00 14WAAJ21L joined #gluster
15:06 jdarcy joined #gluster
15:07 zaitcev joined #gluster
15:08 hateya joined #gluster
15:10 jdarcy joined #gluster
15:10 manik joined #gluster
15:17 vpshastry joined #gluster
15:23 piotrektt_ joined #gluster
15:25 hjmangalam joined #gluster
15:25 hjmangalam1 joined #gluster
15:26 dendazen guys, which ports i need to have open for glusterfs to be working?
15:30 dendazen because i see 24007, 24009, 24010, 24011 and 38465 open
15:30 kshlm|AFK joined #gluster
15:30 kshlm|AFK joined #gluster
15:31 jruggiero joined #gluster
15:34 semiosis ~ports | dendazen
15:35 glusterbot dendazen: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
15:35 manik joined #gluster
15:35 semiosis 3.4 will change that a bit, but it's correct for 3.1 to 3.3
15:43 dendazen Thanks.
15:44 dendazen so 38468 needs to be open as well.
15:44 dendazen If i want to probe new server and join it to authorized storage pool what port needs to be altered so existing running server can probe?
15:44 dendazen in iptables
15:45 jclift joined #gluster
15:46 semiosis 24007
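
Translated into iptables rules on the servers, glusterbot's port list comes out roughly as below (a sketch for 3.1-3.3; widen the 24009+ range to cover however many bricks each server carries):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (24008 only needed for RDMA)
    iptables -A INPUT -p tcp --dport 24009:24016 -j ACCEPT   # one port per brick, counting up from 24009
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # built-in NFS server and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # portmapper, needed by NFS clients
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
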
15:47 DWSR joined #gluster
15:47 DWSR joined #gluster
15:55 DWSR joined #gluster
15:55 DWSR joined #gluster
16:03 satheesh joined #gluster
16:14 manik joined #gluster
16:15 camel1cz joined #gluster
16:19 neofob left #gluster
16:21 manik joined #gluster
16:21 jskinner_ joined #gluster
16:24 Chiku|dc how can I check what is the default performance.cache-size value ?
16:29 semiosis gluster volume set help
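
gluster volume set help prints each tunable with its description and default value, so the default can be read straight out of it; for example (the grep is only a convenience, VOLNAME is a placeholder):

    gluster volume set help | grep -A 3 performance.cache-size
    gluster volume info VOLNAME    # anything changed from its default is listed under "Options Reconfigured"
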
16:30 jskinner joined #gluster
16:31 xavih hi, I have a weird problem with directory listing in gluster
16:31 xavih I created a new volume from 4 bricks with replica 2 just formated in XFS
16:31 ramkrsna joined #gluster
16:32 kr4d10 joined #gluster
16:32 xavih then I created a directory with 300 subdirectories inside the new volume
16:32 manik joined #gluster
16:32 kr4d10 left #gluster
16:32 xavih after clearing all caches, an ls of that directory takes about 1 second to complete
16:32 xavih then I create thounsands of other directories in the root of the volume
16:33 jskinn___ joined #gluster
16:33 xavih after clearing again all caches, the ls of the first directory (not afected by those new directories) takes 3.5 seconds
16:34 xavih this is weird, but it is worse when I delete this directory and I recreate it with the same number of subdirectories
16:34 xavih then the ls takes more than 10 seconds !!!
16:35 xavih I've repeated the test with ext4 and the behavior is similar, but with better times
16:35 jskinner joined #gluster
16:35 xavih any idea ?
16:35 manik1 joined #gluster
16:36 semiosis metadata ops like listing a large directory take time, that's normal
16:36 xavih yes, I know, but I'm listing a directory with only 300 subdirectories
16:37 jskinner_ joined #gluster
16:37 semiosis idk what you mean by "clearing [again] all caches" but sounds like you're not really clearing caches until you delete/recreate... when you see the real performance hit
16:37 semiosis only 300?  no point trying to convince me
16:37 xavih I'm listing the contents of /dirs, and the thousands of directories have been created at /
16:38 xavih the same directory listing takes different times depending on how many directories (maybe files) are in other directories
16:39 andreask joined #gluster
16:39 xavih I clear caches by unmounting the volume, stopping it and flushing kernel buffers with "echo 3 > /proc/sys/vm/drop_caches"
16:40 semiosis seems like that would do it
16:40 xavih before flushing kernel buffers I also issue a sync
16:40 xavih anyway, the last test is the worst
16:40 manik joined #gluster
16:40 xavih deleting and recreating the directory, it takes more than twice to list it
16:42 xavih it seems something related to the file system because if I don't flush kernel buffers, the time is quite good, even if I umount and restart the volume
16:43 xavih however the file system is much faster if I access it directly
16:44 manik joined #gluster
16:44 manik1 joined #gluster
16:45 camel1cz joined #gluster
16:45 manik joined #gluster
16:46 neofob joined #gluster
16:47 tjstansell joined #gluster
16:48 semiosis that's interesting, but hardly surprising
16:48 xavih what can cause this ?
16:48 semiosis are you expecting to be deleting & recreating directories often?
16:49 semiosis xavih: it's due to the glusterfs architecture, directory operations are synchronized across all bricks in the volume (not just replicas, all bricks)
16:49 xavih no, this is only a test I made to reproduce a situation where a directory listing of a directory takes from 10 to 15 seconds after hours of not having accessed it
16:50 xavih yes, but if the directory structure is the same, why it gets worse when the structure has been recreated ?
16:50 semiosis did you flush kernel buffers on all servers, or just one?
16:51 timothy joined #gluster
16:51 xavih if I create 300 subdirectories with the volume empty and then add thousands of other directories, it takes 3.5 seconds
16:51 xavih if I do it in the reverse order, it takes 10 seconds
16:51 xavih all
16:52 semiosis i dont know the answers
16:52 xavih with ext4, the first test takes less than 1 second, and the second one takes about 7 seconds
16:53 xavih it seems that it is related with the underlying file system in some way
16:53 xavih ok, thank you anyway :)
16:54 xavih I'll continue testing it...
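
For anyone wanting to repeat xavih's cold-cache measurement, the procedure he describes is roughly the following (volume name, server and paths are placeholders; the drop_caches step has to run on every brick server as well as the client):

    umount /mnt/testvol
    gluster volume stop testvol
    sync
    echo 3 > /proc/sys/vm/drop_caches
    gluster volume start testvol
    mount -t glusterfs server1:/testvol /mnt/testvol
    time ls /mnt/testvol/dirs > /dev/null
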
16:57 Mo___ joined #gluster
17:00 vpshastry joined #gluster
17:06 rastar joined #gluster
17:12 jbrooks joined #gluster
17:23 andrewbogott joined #gluster
17:26 vpshastry joined #gluster
17:32 andrewbogott I could use some help with troubleshooting.  Several of my gluster volumes are suddenly reporting as read-only on their mounted systems.
17:32 piotrektt_ joined #gluster
17:32 andrewbogott But 'mount' thinks they are read/write, as does 'gluster volume info'
17:33 andrewbogott Where to begin?  I already tried restarting the volumes and remounting, to no avail
17:35 jclift andrewbogott: Does the stuff in the Gluster Basic Troubleshooting doc help? http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting
17:35 glusterbot <http://goo.gl/7m2Ln> (at www.gluster.org)
17:36 * andrewbogott reads
17:43 ramkrsna joined #gluster
17:43 ramkrsna joined #gluster
17:46 lpabon joined #gluster
17:48 primusinterpares joined #gluster
17:50 semiosis andrewbogott: sounds like you might be using quorum & had a server/brick die.  in any case, check your client log file & feel free to pastie it
17:51 semiosis maybe ,,(pasteinfo) and also gluster volume status
17:52 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
17:53 andrewbogott cluster.quorum-type: auto  <- does that mean quorum is on, or maybe on?
17:53 semiosis means quorum is enabled... a client will turn read-only if it can't see a majority of replicas
17:54 andrewbogott Ah, that would do it.
17:54 andrewbogott Hm… I guess maybe having this problem is better than having a split brain
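
The trade-off andrewbogott is weighing is controlled by a single volume option; a sketch, assuming a volume named VOLNAME:

    gluster volume info VOLNAME | grep quorum            # shows cluster.quorum-type if it has been set
    gluster volume set VOLNAME cluster.quorum-type none  # never go read-only, accepting the split-brain risk
    gluster volume set VOLNAME cluster.quorum-type auto  # go read-only when a majority of replicas is unreachable
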
17:55 timothy joined #gluster
17:59 Ryan_Lane joined #gluster
18:00 Ryan_Lane so, when quorum is failing, is there any way to know which brick the client can't connect to?
18:00 Ryan_Lane the log entry for this isn't terribly useful
18:00 Ryan_Lane gluster volume status on the bricks shows all bricks up
18:00 Ryan_Lane and no, I'm not using a firewall
18:03 semiosis log should say which brick connection was lost
18:03 semiosis iirc it even has host:port of the brick right in the log
18:05 Ryan_Lane on the client, or the host?
18:05 Ryan_Lane the client does not
18:06 lh joined #gluster
18:06 lh joined #gluster
18:07 semiosis did you turn your client log level up too high?  those are probably info or warn level messages
18:07 Ryan_Lane semiosis: the filesystem is reporting as read only
18:09 semiosis Ryan_Lane: unmount/remount client?
18:09 Ryan_Lane did that
18:09 Ryan_Lane stopped the volume
18:09 Ryan_Lane restarted it
18:09 semiosis and the log doesn't say which brick is unreachable?
18:09 Ryan_Lane killed the client
18:09 Ryan_Lane nope
18:09 semiosis thats weird
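
Two places that usually narrow down which brick a client has lost, assuming the usual log naming convention of the mount point with slashes turned into dashes (VOLNAME and the path are placeholders):

    grep -iE "disconnect|refused|quorum" /var/log/glusterfs/mnt-VOLNAME.log | tail -n 20
    gluster volume status VOLNAME    # anything not marked "Y" under Online is suspect
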
18:18 helloadam joined #gluster
18:20 piotrektt joined #gluster
18:20 theron joined #gluster
18:23 * Ryan_Lane sighs
18:24 Ryan_Lane seems stopping any volume makes glusterd inaccessible for any commands
18:25 Ryan_Lane I [input.c:46:cli_batch] 0-: Exiting with: -22
18:36 tqrst- left #gluster
18:38 jdarcy joined #gluster
18:39 disarone joined #gluster
18:43 Ryan_Lane and if I kill the glusterd process and start gluster, it now says my volume is already started
18:43 Ryan_Lane and that it's stopped
18:44 jdarcy It says that it's started *and* stopped?  Neat.
18:45 jdarcy My first response to that (and yes, I've seen it) is to use the "force" option on start.
18:45 Ryan_Lane yeah. I did that
18:45 jdarcy My second response is to manually kill everything gluster-related (killall -9 -r gluster) and then restart glusterd.
18:46 Ryan_Lane that's going to be really not-fun
18:46 Ryan_Lane because gluster also has an issue where when it starts it shits itself and doesn't start half of the volumes properly
18:46 jdarcy I suppose you could try removing the pid file (I think that's what it checks) but I've never tried that approach myself.
18:46 Ryan_Lane and I then have to force start all volumes
18:46 Ryan_Lane the processes are actually running
18:47 semiosis jdarcy: fyi Ryan_Lane has hundreds of volumes in his pool
18:47 semiosis iirc
18:47 Ryan_Lane yes
18:47 Ryan_Lane and I've been proactively reducing that number
18:47 Ryan_Lane which has stabilized things somewhat
18:47 Ryan_Lane btw, it would be awesome to share subvolumes in gluster with share permissions
18:48 Ryan_Lane then I'd only need two volumes
18:48 jdarcy I must have missed what the original symptom was, then.  If all the processes are actually running, why are you trying to start the volume?
18:48 Ryan_Lane jdarcy: I was trying to stop the volume
18:48 Ryan_Lane then restart it
18:48 Ryan_Lane stop causes glusterd to become unresponsive to cli
18:48 jdarcy We are planning to get some of the multi-tenancy stuff from HekaFS in (finally!) for 3.5 or 3.6 ish.
18:49 Ryan_Lane but it continues to work for other things, according to strace
18:49 jdarcy That might allow you to have fewer volumes and fewer processes.
18:49 Ryan_Lane that would be very ideal
18:50 jdarcy If you wanted to work on a custom solution before then, I'd be glad (as the architect of HekaFS) to work with you on that.  Most of it's possible within the current infrastructure, it's just a few nagging bits around the edges that have kept it from being integrated so far.
18:50 nueces joined #gluster
18:51 jdarcy Do you have a stack trace for glusterd from when the stop makes it unresponsive?
18:51 Ryan_Lane it doesn't actually crash
18:51 Ryan_Lane the cli return code is -22
18:52 jdarcy -EINVAL?
18:52 Ryan_Lane strace shows that it continues checking socket files and connecting to the other daemons
18:52 Ryan_Lane it just stops working from the cli
18:54 jdarcy Does it get as far as accepting the new CLI connection (i.e. accept(2) completes, or socket_init if you're looking in gdb)?
18:55 Ryan_Lane well, it's an active filesystem, I really don't want to attach it in gdb
18:55 Ryan_Lane some volumes are working properly
18:55 Ryan_Lane some aren't
18:56 jdarcy Well, can you connect with strace and see if accept(2) returns?  This is glusterd, unlike glusterfs or glusterfsd it shouldn't affect I/O if if pauses for a second.
18:57 Ryan_Lane sure. I'm checking the fsd process as well right now, to try to debug the read-only issue as well
18:59 Ryan_Lane ah. found it
18:59 Ryan_Lane one of the four glusterfsd processes was hung
18:59 Ryan_Lane looks like a deadlock
19:00 Ryan_Lane well, now I wish I would have attached that in gdb :(
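
If it happens again, a backtrace of the suspect brick process can be captured without killing it (PID is a placeholder; gdb only pauses the process while it dumps the threads):

    pgrep -l glusterfsd                                                    # list brick processes and their PIDs
    gdb -p PID -batch -ex "thread apply all bt" > /tmp/glusterfsd-bt.txt   # save a backtrace of every thread
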
19:04 jskinne__ joined #gluster
19:07 RobertLaptop joined #gluster
19:17 Ryan_Lane I don't need the space given by all 4 bricks. if I reduced my brick count from 4 to 2 for each volume, it would make my glusterd processes far less busy, right?
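
Shrinking a 2x2 (replica 2) volume to a single replica pair means removing one complete pair of bricks; a sketch of the 3.3 procedure, with the volume and brick names as placeholders:

    gluster volume remove-brick VOLNAME serverC:/brick serverD:/brick start
    gluster volume remove-brick VOLNAME serverC:/brick serverD:/brick status   # wait for the data migration to finish
    gluster volume remove-brick VOLNAME serverC:/brick serverD:/brick commit
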
19:20 camel1cz left #gluster
19:23 Ryan_Lane hm. seems that every glusterfsd process are deadlocked on a single brick
19:33 jdarcy ?
19:35 lpabon joined #gluster
19:35 Ryan_Lane that's the reason for my quorum failure
19:36 Ryan_Lane and I should modify that to: seems every volume with a quorum failure is due to a deadlocked glusterfsd process on a single brick
20:00 jdarcy You mean a single server or filesystem containing multiple bricks?
20:01 jdarcy Or multiple volumes getting deadlocked on different bricks?
20:03 cw joined #gluster
20:13 jdarcy BRB.
20:13 jdarcy left #gluster
20:16 jdarcy joined #gluster
20:21 jdarcy joined #gluster
20:21 jdarcy_ joined #gluster
20:33 jskinner_ joined #gluster
20:33 jdarcy joined #gluster
20:36 ramkrsna joined #gluster
20:36 ramkrsna joined #gluster
21:01 sjoeboo joined #gluster
21:12 wushudoin joined #gluster
21:13 sjoeboo joined #gluster
21:13 wushudoin joined #gluster
21:13 wushudoin left #gluster
21:22 ramkrsna joined #gluster
21:22 msmith__ Running gluster 3.3.1 with XFS bricks.  Volume is setup 3x2 (2 copies).  volume is accessed via NFSv3 and is being used for dovecot mail storage.  I keep seeing various files go "Remote I/O error".  The gluster nfs.log is reporting "[2013-03-25 15:58:31.277100] E [dht-helper.c:652:dht_migration_complete_check_task] 0-mail-dht: /mail/72/1000001650/index/mailboxes/INBOX/dovecot.index.log: failed to get the 'linkto' xattr Permission denied"
21:23 msmith__ ls in the directory shows the file with all ?.  -????????? ? ?                ?      ?            ? /mail/72/1000001650/index/mailboxes/INBOX/dovecot.index.log
21:24 msmith__ gfid is the same on the bricks that actually hold the file.  What else should I look at, or how do I fix?
21:36 msmith__ i'll be afk, but logging any replies if anyone has suggestions
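
The "?????????" stat output together with the 'linkto' xattr error usually points at a permission or DHT link-file problem on one of the bricks; a starting point for inspection, with /export/brickN standing in for the real brick paths:

    # run on every brick that holds the file or a stub of it
    ls -l /export/brickN/mail/72/1000001650/index/mailboxes/INBOX/dovecot.index.log
    getfattr -m . -d -e hex /export/brickN/mail/72/1000001650/index/mailboxes/INBOX/dovecot.index.log
    # compare trusted.gfid across bricks; a zero-byte file with mode ---------T and a
    # trusted.glusterfs.dht.linkto xattr is a DHT link file pointing at another brick
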
21:38 dendazen_ joined #gluster
22:25 manik joined #gluster
22:37 andreask joined #gluster
23:03 arusso joined #gluster
23:09 robos joined #gluster
23:29 ramkrsna joined #gluster
23:29 ramkrsna joined #gluster
23:35 yinyin joined #gluster
23:36 dendazen joined #gluster
