
IRC log for #gluster, 2014-07-25


All times shown according to UTC.

Time Nick Message
00:10 gildub joined #gluster
00:12 Peter3 why would i keep getting setattr errors?
00:12 Peter3 E [posix.c:1177:posix_mknod] 0-sas02-posix: setting xattrs on
00:18 karmatest joined #gluster
00:18 karmatest test1 test2 test3 test4 test5 test6
00:19 karmatest test1test1test1test1test1test1test1test1test1test1test1test1test1test1test1test1test1test1 test2test2test2test2test2test2test2test2test2test2 test3test3test3test3test3test3test3test3test3test3test3test3 test4test4test4test4test4test4test4test4test4test4
00:19 purpleidea JoeJulian: does the bot remove karma for pasting in channel??
00:19 purpleidea JoeJulian: ++
00:19 glusterbot purpleidea: JoeJulian's karma is now 10
00:20 karmatest2 joined #gluster
00:21 karmatest2 purpleidea: ++
00:21 glusterbot karmatest2: purpleidea's karma is now 3
00:21 karmatest2 purpleidea: +=9
00:21 karmatest2 purpleidea: ++
00:21 glusterbot karmatest2: purpleidea's karma is now 4
00:21 karmatest2 purpleidea: +++
00:21 glusterbot karmatest2: +'s karma is now 1
00:21 glusterbot karmatest2: purpleidea's karma is now 5
00:21 purpleidea 20:22 < karmatest> test1 test2 test3 test4 test5 test6
00:21 purpleidea 20:22 < karmatest> test1test1test1test1test1test1test1test1test1test1test1test1test1test1test1test1test1test1  test2test2test2test2test2test2test2test2test2test2
00:22 purpleidea 20:24 <@glusterbot> karmatest2: purpleidea's karma is now 3
00:22 purpleidea 20:24 < karmatest2> purpleidea: +=9
00:22 purpleidea 20:24 < karmatest2> purpleidea: ++
00:22 purpleidea 20:24 <@glusterbot> karmatest2: purpleidea's karma is now 4
00:22 purpleidea 20:24 < karmatest2> purpleidea: +++
00:22 glusterbot purpleidea: Error: You're not allowed to adjust your own karma.
00:22 glusterbot purpleidea: +'s karma is now 2
00:22 glusterbot purpleidea: Error: You're not allowed to adjust your own karma.
00:22 purpleidea 20:25 <@glusterbot> karmatest2: +'s karma is now 1
00:22 purpleidea (sorry for the noise, nothing else was going on...)
00:27 _pol joined #gluster
00:28 _pol joined #gluster
00:52 tdasilva joined #gluster
00:54 zerick joined #gluster
00:57 _Bryan_ joined #gluster
01:23 hagarth joined #gluster
01:27 chirino joined #gluster
01:39 harish_ joined #gluster
02:15 elico joined #gluster
02:29 Peter1 joined #gluster
02:35 bharata-rao joined #gluster
03:24 haomaiwang joined #gluster
03:26 fengkun02 joined #gluster
03:30 fengkun02 i have a question, who can help me? i use gluster version 3.5.1, i created a cluster with four peers
03:31 fengkun02 but after i execute 'replace-brick', my cluster changes to five peers
03:33 fengkun02 # gluster pool list
03:33 fengkun02 UUID                                    Hostname        State
03:33 fengkun02 8dea3b10-f785-4f67-9248-92962a0000b5    10.96.33.20     Connected
03:33 fengkun02 7e9dbe48-1377-4e9b-bd69-2c4fae458363    10.96.35.33     Connected
03:33 fengkun02 d78d2718-c4a8-40be-9871-5cab6f5d53fc    10.96.33.18     Connected
03:33 fengkun02 ec129821-66f6-429d-a247-394da3c77a5e    10.96.45.43     Connected
03:33 fengkun02 76688117-cf9d-4699-808a-c7852feb9f50    localhost       Connected
03:34 fengkun02 45.43 does not belong to the cluster
03:34 fengkun02 # gluster peer  status
03:34 fengkun02 Number of Peers: 4
03:34 fengkun02 Hostname: 10.96.33.20
03:34 fengkun02 Uuid: 8dea3b10-f785-4f67-9248-92962a0000b5
03:34 fengkun02 State: Peer in Cluster (Connected)
03:34 fengkun02 Hostname: 10.96.35.33
03:35 fengkun02 Uuid: 7e9dbe48-1377-4e9b-bd69-2c4fae458363
03:35 fengkun02 State: Peer in Cluster (Connected)
03:35 fengkun02 Hostname: 10.96.33.18
03:35 fengkun02 Uuid: d78d2718-c4a8-40be-9871-5cab6f5d53fc
03:35 fengkun02 State: Peer in Cluster (Connected)
03:35 fengkun02 Hostname: 10.96.45.43
03:35 fengkun02 Uuid: ec129821-66f6-429d-a247-394da3c77a5e
03:35 fengkun02 State: Peer Rejected (Connected)
03:35 fengkun02 status is rejected
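
The usual recovery for a peer stuck in "Peer Rejected" is to resync its glusterd state from a healthy node. A sketch of the standard procedure, run on the rejected peer (back up /var/lib/glusterd first; init-script name and paths assume a default install):

    service glusterd stop
    cd /var/lib/glusterd
    # keep glusterd.info (this node's UUID), drop everything else
    find . -mindepth 1 ! -name glusterd.info -delete
    service glusterd start
    gluster peer probe 10.96.33.18   # any healthy peer in the cluster
    service glusterd restart
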
03:41 RameshN joined #gluster
03:45 Humble joined #gluster
03:48 shubhendu joined #gluster
03:52 atinmu joined #gluster
03:54 itisravi joined #gluster
04:01 saurabh joined #gluster
04:07 jiffe98 joined #gluster
04:11 nishanth joined #gluster
04:20 purpleidea joined #gluster
04:23 kanagaraj joined #gluster
04:25 kshlm joined #gluster
04:31 Rafi_kc joined #gluster
04:32 _pol joined #gluster
04:32 anoopcs joined #gluster
04:37 jiffin joined #gluster
04:38 Peter4 joined #gluster
04:41 anoopcs joined #gluster
04:43 RameshN joined #gluster
04:43 anoopcs joined #gluster
04:56 ramteid joined #gluster
04:57 ramteid joined #gluster
04:58 Humble joined #gluster
05:08 spandit joined #gluster
05:09 meghanam joined #gluster
05:09 meghanam t
05:10 cjhanks Yeah, t!
05:23 kdhananjay joined #gluster
05:24 aravindavk joined #gluster
05:32 DV joined #gluster
05:40 ppai joined #gluster
05:42 rastar joined #gluster
05:53 shylesh__ joined #gluster
06:00 psharma joined #gluster
06:17 LebedevRI joined #gluster
06:21 sputnik13 joined #gluster
06:32 Peter1 joined #gluster
06:37 cultavix joined #gluster
06:44 ctria joined #gluster
06:49 raghu joined #gluster
06:57 vpshastry joined #gluster
07:05 ekuric joined #gluster
07:05 lalatenduM joined #gluster
07:05 vu joined #gluster
07:06 lalatenduM hchiramm, pm
07:06 hchiramm ok
07:12 dusmant joined #gluster
07:16 keytab joined #gluster
07:20 Thilam joined #gluster
07:22 sputnik13 joined #gluster
07:23 anoopcs joined #gluster
07:26 ppai joined #gluster
07:27 nightlydev joined #gluster
07:29 cultavix joined #gluster
07:31 bharata_ joined #gluster
07:35 Philambdo joined #gluster
07:45 glusterbot New news from resolvedglusterbugs: [Bug 1119209] [RFE] cli command to display volume options <https://bugzilla.redhat.com/show_bug.cgi?id=1119209>
07:51 haomaiwa_ joined #gluster
07:55 Nightlydev joined #gluster
07:57 sputnik13 joined #gluster
08:15 glusterbot New news from resolvedglusterbugs: [Bug 1084964] SMB: CIFS mount fails with the latest glusterfs rpm's <https://bugzilla.redhat.com/show_bug.cgi?id=1084964>
08:19 ppai joined #gluster
08:21 ricky-ti1 joined #gluster
08:26 haomaiw__ joined #gluster
08:29 anoopcs joined #gluster
08:30 anoopcs joined #gluster
08:34 R0ok_ joined #gluster
08:38 Slashman joined #gluster
08:42 social joined #gluster
08:47 dusmant joined #gluster
08:48 lalatenduM joined #gluster
08:52 hchiramm joined #gluster
08:53 basso joined #gluster
08:53 basso is gluster 3.4.x the long term stable version?
08:54 andreask joined #gluster
08:57 JoeJulian basso: That's what the community is currently recommending. The developers say 3.5 is stable. None of them are long-term. Backports will happen when asked and if the dev can do so.
08:57 ppai joined #gluster
08:57 glusterbot New news from newglusterbugs: [Bug 1123289] crash on fsync <https://bugzilla.redhat.com/show_bug.cgi?id=1123289>
08:59 vimal joined #gluster
08:59 NuxRo JoeJulian: but isn't the version in RHEL7 supposed to stay at 3.4.0?
09:00 JoeJulian NuxRo: no idea. I'm not a rhel subscriber.
09:01 basso JoeJulian: Ah okay, will use the 3.4 release then. It will be used for a simple HA samba service, so I only need it to be stable.
09:02 NuxRo JoeJulian: me neither, but I'm centos subscriber
09:02 NuxRo :)
09:03 JoeJulian but you're not running Red Hat Storage
09:05 JoeJulian I know kkeithley said 3.4 would hit eol "soon" but until I stop seeing new bugs that I consider critical in 3.5, I'm not going to recommend it.
09:05 JoeJulian I'm going to have a hard time waiting to pull the trigger on 3.6 though.
09:10 tg2 joined #gluster
09:21 liquidat joined #gluster
09:22 NuxRo JoeJulian: re 3.6, explain
09:23 haomaiwa_ joined #gluster
09:27 glusterbot New news from newglusterbugs: [Bug 1123294] [FEAT] : provide an option to set glusterd log levels other than command line flag <https://bugzilla.redhat.com/show_bug.cgi?id=1123294>
09:27 meghanam joined #gluster
09:33 qdk joined #gluster
09:33 haomai___ joined #gluster
09:39 spandit joined #gluster
09:40 bala joined #gluster
09:51 calum_ joined #gluster
09:58 haomaiwa_ joined #gluster
10:00 StarBeast joined #gluster
10:05 dusmant joined #gluster
10:07 cyberbootje joined #gluster
10:10 hchiramm joined #gluster
10:30 qdk joined #gluster
10:49 meghanam joined #gluster
10:59 gildub joined #gluster
11:06 samsaffron joined #gluster
11:07 neoice joined #gluster
11:10 fubada joined #gluster
11:20 vpshastry1 joined #gluster
11:21 ricky-ticky joined #gluster
11:26 andreask joined #gluster
11:34 vpshastry joined #gluster
11:38 StarBeast joined #gluster
11:40 aravindavk joined #gluster
11:42 dusmant joined #gluster
11:45 ccha2 is there a way to change logs timestamps to use system UTC ?
11:45 smokingisbadmkay joined #gluster
11:46 ccha2 with the timezone
12:04 ctria joined #gluster
12:14 SpComb using qemu/libvirt with glusterfs seems to require glusterd `option rpc-auth-allow-insecure on` and glusterfsd `server.allow-insecure on`
12:14 SpComb what are the implications of setting those?
12:14 SpComb I presume for a volume it should be okay if you use the ACL to limit access to your virt hosts, and then assume that nothing evil could possibly ever run on those
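
For reference, the two options live in different places; a sketch, with a hypothetical volume name:

    # in /etc/glusterfs/glusterd.vol, inside the "volume management" block,
    # then restart glusterd:
    option rpc-auth-allow-insecure on

    # per volume, then stop/start the volume to apply:
    gluster volume set vm-images server.allow-insecure on
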
12:16 glusterbot New news from resolvedglusterbugs: [Bug 1110456] GlusterFS 3.4.5 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1110456> || [Bug 854541] Need better practise on (not) editing /usr/share/java/conf/core-site.xml <https://bugzilla.redhat.com/show_bug.cgi?id=854541>
12:17 bala joined #gluster
12:18 dusmant joined #gluster
12:20 diegows joined #gluster
12:24 edward1 joined #gluster
12:25 RameshN joined #gluster
12:26 mojibake joined #gluster
12:29 SmithyUK joined #gluster
12:30 B21956 joined #gluster
12:37 hagarth joined #gluster
12:38 bala joined #gluster
12:44 vshankar joined #gluster
12:46 rjoseph joined #gluster
12:46 smokingisbadmkay so i'm doing a little benchmarking on a gluster volume and I get some unexpected results; the volume consists of 2 bricks spread across 2 servers.
12:47 smokingisbadmkay I'm measuring up to 14 GB/sec for random read speeds, which is way too high for both disk and link speed
12:48 smokingisbadmkay which is why I was wondering if the clients kept caches of their own. but according to the gluster FAQ, the clients keep no caches
12:49 smokingisbadmkay can anyone think of any reasons why this number is so high?
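
A likely explanation is the client kernel rather than gluster: FUSE mounts still go through the kernel page cache, so re-reads of a file that fits in RAM never touch the network even if glusterfs itself caches nothing. A sketch of taking the cache out of the measurement (mount path hypothetical; direct I/O only works where the mount supports O_DIRECT):

    # flush the page cache between runs (as root)...
    sync; echo 3 > /proc/sys/vm/drop_caches
    # ...or bypass it entirely with direct I/O
    dd if=/mnt/gluster/testfile of=/dev/null bs=1M iflag=direct
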
12:49 anoopcs left #gluster
12:52 SpComb how can I move a specific glusterfs volume from one network to a different one?
12:52 SpComb say I have a handful of glusterfs servers, which are on two different networks
12:53 SpComb I want to have one volume on one of those networks, and a different volume on a different network
12:53 SmithyUK http://gluster.org/community/documentation/index.php/Gluster_3.2:_Migrating_Volumes
12:53 glusterbot Title: Gluster 3.2: Migrating Volumes - GlusterDocumentation (at gluster.org)
12:57 SpComb SmithyUK: the servers are exactly the same, they're just each on two different networks
13:02 SpComb if I just try and `volume create ...` using the alternative network's address as the brick host, I get `Host ... is not in 'Peer in Cluster' state`
13:03 SpComb if I try and `peer probe foohost.cluster` I just get `peer probe: success: host foohost.cluster port 24007 already in peer list`
13:05 kkeithley1 joined #gluster
13:05 jiku joined #gluster
13:07 bennyturns joined #gluster
13:08 SmithyUK So you have 2 network interfaces on the same server and you want the gluster volume to be accessible on both?
13:09 SpComb I want specific volumes to be available on different ones
13:10 SpComb I have an existing slow network with all the clients connected, and then a new faster network with a subset of clients connected
13:19 tdasilva joined #gluster
13:19 dcope joined #gluster
13:26 theron joined #gluster
13:27 dcope anyone know of a credible dedicated server provider with 10gbps & private networking for gluster?
13:28 DV__ joined #gluster
13:28 FooBar pfff... that would be challenging... maybe if you go with some co-located setup ;)
13:28 dusmant joined #gluster
13:29 dcope :(
13:29 dcope leaseweb's racks only have 100mbps
13:29 FooBar most isp's only have 100M or 1-gig to the racks
13:30 dcope seems like it would be easy to saturate 1gbps with gluster :P
13:30 FooBar just get your own switch... connect the systems with quad-gigabit or dual-10-gig
13:31 FooBar i still have my gluster boxes on single 1-gig... and the network isn't the bottleneck
13:31 FooBar IOPS is :(
13:31 dcope hm
13:31 FooBar many small files... average filesize < 4k
13:34 dcope bandwidth graph for the past month http://cl.ly/image/3Z390W2F0V47
13:34 glusterbot Title: graph.png (at cl.ly)
13:34 dcope i need 10g :(
13:36 andreask joined #gluster
13:36 FooBar http://i.sigio.nl/251056d5de9da9782276120bc64a29e4.png ... not really here ;)
13:36 dcope nice :D
13:37 FooBar 3 gluster boxes, triple-replicated... but all 3 graphs look a lot alike
13:37 dcope is using gluster over the WAN a bad idea?
13:37 dcope :P
13:37 FooBar yeah ;)
13:37 dcope hehe
13:39 dcope hm softlayer has 10gbps public & private
13:40 FooBar nice
13:42 plarsen joined #gluster
13:42 theron joined #gluster
13:45 theron_ joined #gluster
13:50 rjoseph left #gluster
13:53 giannello joined #gluster
13:55 Thilam joined #gluster
14:03 recidive joined #gluster
14:07 andreask joined #gluster
14:08 mortuar joined #gluster
14:13 wushudoin joined #gluster
14:20 julim joined #gluster
14:23 dcope joined #gluster
14:23 vpshastry joined #gluster
14:35 vpshastry joined #gluster
14:36 sjm joined #gluster
14:38 overclk joined #gluster
14:38 burn420 joined #gluster
14:40 sjm left #gluster
14:42 xleo joined #gluster
14:48 jobewan joined #gluster
14:48 bgupta_ joined #gluster
14:50 bgupta_ Docs recommend that separate disks/volumes are used for the backing store. What's the reason for this, as we're kinda hoping we can just use a directory on the root volume?
14:57 ndk joined #gluster
14:59 R0ok_ joined #gluster
15:02 theron joined #gluster
15:03 dcope joined #gluster
15:11 lalatenduM joined #gluster
15:15 daMaestro joined #gluster
15:18 anoopcs joined #gluster
15:25 lmickh joined #gluster
15:29 jiku joined #gluster
15:30 tg2 joined #gluster
15:34 mortuar joined #gluster
15:36 harish_ joined #gluster
15:43 xleo joined #gluster
15:43 brettnem joined #gluster
15:43 brettnem hey all
15:43 dtrainor_ joined #gluster
15:44 brettnem I was trying to figure out how to bind gluster to a specific network interface. How do I do this?
15:44 brettnem oh never mind, I just found: option transport.socket.bind-address
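
That option goes in glusterd's own volfile rather than in a volume; a minimal sketch (address hypothetical, glusterd restart required). Note this binds the management daemon; brick processes may still listen on all interfaces:

    # /etc/glusterfs/glusterd.vol, inside the "volume management" block:
    option transport.socket.bind-address 10.0.0.5
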
15:48 chirino joined #gluster
15:50 sjm joined #gluster
15:52 Pupeno_ joined #gluster
15:54 dtrainor_ joined #gluster
16:05 Nik777 joined #gluster
16:06 Nik777 Hi All - could anyone point me to resources somewhere that explain 2 points:
16:06 Nik777 Why GLusterFS requires a 64-bit CPU
16:06 Nik777 and why Glusterfs requires a min of 1GB RAM?
16:07 Nik777 I'm looking for a relatively lightweight distributed FS, and GLusterFS looked like a good fit.
16:07 vikumar joined #gluster
16:08 kkeithley_ Where do you see those? Don't confuse RHS/RHSS with GlusterFS. The community has 32-bit RPMs and dpkgs available for a variety of platforms, some of which, e.g. ARM, you won't find hardware that has more than 256MB or 512MB of RAM.
16:08 Nik777 However, the resource requirements make me wonder whether GlusterFS places inherent loads on the nodes?
16:08 dcope joined #gluster
16:09 Nik777 I saw the hardware requirements on the GlusterFS website, in the "QuickStart" section
16:09 rotbeard joined #gluster
16:09 Peter1 joined #gluster
16:10 Peter1 morning! Gluster NFS issue today
16:10 Peter1 anyone using NFS on Gluster?
16:10 kkeithley_ Hmmm. Those are reasonable guidelines. They're not hard requirements AFAIK or AFAIC
16:10 Nik777 ... Actually, I was looking at running the distributed FS on ARM systems
16:10 Peter1 I have random NFS client hang
16:11 kkeithley_ There are people running community GlusterFS on RaspberryPi
16:11 jiffe98 if I delete a volume should it delete the associated directories on the gluster servers?
16:11 kkeithley_ jiffe98: no
16:12 vikumar joined #gluster
16:13 jiffe98 kkeithley_: so if I want to create a volume on the same path I should delete those directories manually?
16:13 Nik777 Ok - thank you! I will continue with GlusterFS on my ARM nodes, and see how it runs :)
16:13 kkeithley_ jiffe98: if there's nothing in those dirs that you care about
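
Emptying the directory may still not be enough, since gluster tags bricks with extended attributes and refuses to reuse a marked path. A sketch of the usual cleanup, with a hypothetical brick path:

    setfattr -x trusted.glusterfs.volume-id /export/brick1
    setfattr -x trusted.gfid /export/brick1
    rm -rf /export/brick1/.glusterfs
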
16:18 Nik777 kkiethley_: Thank you very much for your quick and simple responses :)
16:19 kkeithley_ yw
16:21 vimal joined #gluster
16:32 JoeJulian whoah... pranith told me last night the client doesn't complete a write until it's complete on all replicas. That should be a performance option to return on first complete and background the rest.
16:33 dtrainor_ joined #gluster
16:52 brettnem_ joined #gluster
16:52 tdasilva joined #gluster
16:54 julim joined #gluster
16:57 dcope are there any other tools like gluster that load balance a couple file servers? i'm not going to be able to get my servers on the same private network so i can't use gluster.
16:59 gehaxelt dcope, nfs ?
16:59 JoeJulian rsync + a load balancer?
16:59 dcope JoeJulian: yes
16:59 dcope but not one that works on the http level like haproxy
16:59 dcope gehaxelt: nah, probably ext
16:59 dcope 3
16:59 gehaxelt hmm
17:00 JoeJulian You can load balance more than http with an F5.
17:00 Lee- nfs+inotify+rsync is a simple linux based solution, but has its own limitations
17:01 dcope hm i guess i could also just make my own simple round robin balancer
17:01 dcope that sits in front
17:02 Lee- file level replication is something I've tried different solutions for over the years. everything from generic NFS with failovers, unison, rsync, even some hacky type methods with symlinks
17:02 dcope what were the rsync limitations?
17:02 dcope seems like a cron job to rsync the directories would suffice?
17:02 Lee- time delay
17:02 dcope oh
17:03 Lee- like for a website that has a user generated content -- say a user creates a blog with an image. their page request after the image is uploaded may not have the file. This can be solved at the application level by effectively adding a file not found check and requesting from the other server
17:03 dcope yep
17:03 Lee- the best solutions I've come up with all involve some level of application support to handle missing files or synchronization. what attracts me to glusterfs is that it is a more generalized solution
17:04 dcope yeah gluster seems great
17:04 dcope it would work perfectly for what i need but i just can't get dedicated machines on the same private network
17:04 Lee- inotify is a nifty function in linux (idk if other kernels have it). inotify allows you to run scripts whenever a file system change occurs. So you can kick off the rsync script immediately after the file is written. This reduces latency on sync between the file servers, but the latency is still there
17:05 Lee- If it's a 100MB file, then even with gigabit between the file servers, there can still be a couple seconds delay which may or may not be OK depending on the use case
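
A minimal sketch of the inotify+rsync pattern Lee- describes, using inotify-tools (paths and the standby host are hypothetical; every event triggers a full-tree rsync, which is simple but coarse):

    #!/bin/sh
    # re-sync /srv/files to the standby on any write/create/delete/move
    inotifywait -m -r -e close_write,create,delete,move --format '%w%f' /srv/files |
    while read -r changed; do
        rsync -az --delete /srv/files/ standby:/srv/files/
    done
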
17:05 Lee- another option if write performance is not super critical is setting up a point to point VPN between your file servers
17:05 dcope yeah i'm not worried about write performance at all
17:06 dcope i just want fast reads
17:06 Lee- probably glusterfs+VPN is going to be one of the better options if application level changes are not possible
17:06 dcope how would a vpn help?
17:07 Lee- VPN effectively makes the servers appear to be on a shared LAN
17:07 dtrainor_ joined #gluster
17:07 dcope hm
17:07 dcope wonder if the network speeds would be OK
17:07 dcope they'll be in the same data center... just not on the same network :(
17:07 Lee- So you'd have a virtualized network interface, like instead of eth0 or em0, you'd have say tun0 which handles your storage network
17:07 kkeithley1 joined #gluster
17:08 siel joined #gluster
17:08 Lee- without knowing anything about the network in this data center, idk if it would be possible, but you may want to ask about a VLAN for this
17:10 Lee- another option, although much more expensive, would be a cross connect between the cabinets. damn near any data center will do that, but they may charge a lot for it
17:10 juhaj Slightly off-topic, but I'd appreciate some help: how do I find out where a socket leads? Namely, /proc/<pid>/fd says 7 -> socket:[869058388] and I would need to know what's at the other end of that socket. It has suddenly stopped providing data!
17:11 Lee- so vlan would be cheap if they can do it, cross connect would work and have high performance, but many data centers are expensive for cross connects, next would be VPN, but it has encryption overhead
17:11 dcope Lee-: yeah they're trying to sell me a rack right now
17:11 dcope but it's not 10 Gbps :<
17:12 Peter1 any gluster client for RHEL 4?
17:12 Peter1 or Centos 5.5 ?
17:13 bennyturns Peter1, no on el4; RHEL/CentOS 5.6 and later
17:13 bennyturns Peter1, how did things work out yesterday?
17:14 Peter1 on the quota side, disabling it is super slow since there are tons of files
17:14 Peter1 i ended up recreating the filesystem
17:14 Peter1 the stopping on the volume was quick
17:14 bennyturns ahh k
17:14 Peter1 after disabling quota
17:14 Peter1 and we're still getting some quota mismatch
17:14 Peter1 do u run NFS over gluster?
17:15 Lee- juhaj, install sockstat
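
Besides sockstat, the socket inode from /proc/<pid>/fd can be matched against the kernel's socket tables directly; a sketch:

    # show the endpoints for inode 869058388
    lsof -nP | grep 869058388
    # or, without lsof (addresses are hex-encoded):
    grep 869058388 /proc/net/tcp /proc/net/tcp6 /proc/net/unix
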
17:15 bennyturns I test everything
17:15 Peter1 i m having NFS client issue w/ gluster export now
17:15 bennyturns Peter1, like what?
17:15 Peter1 some NFS client just hang on the mount
17:15 Peter1 when have load
17:16 bennyturns you are seeing some crazy stuff, I run performance regression tests daily that load things pretty heavily
17:16 bennyturns I havent seen any hangs :(
17:16 bennyturns Peter1, what sort of workload?
17:16 Peter1 ya what is ur NFS mount options?
17:16 bennyturns just vers=3
17:16 bennyturns nothing specific
17:16 Peter1 workload seems like pretty much just regular read write
17:16 Peter1 o
17:17 bennyturns small files?  rally large writes?
17:18 bennyturns there was a problem in older versions where really large nfs writes would slow down and stop, but that is fixed
17:18 Peter1 rw right?
17:19 bennyturns Peter1, I run several benchmarks, seq rw, random rw for large files, and every file op for smallfiles
17:19 bennyturns ya rw
17:19 Peter1 ok let me try
17:20 Peter1 cuz i used to specify the rsize and wsize
17:20 Peter1 and also hard,intr
17:27 systemonkey joined #gluster
17:32 Peter1 ya i changed to vers=3 only and still hanging
17:32 Peter1 any suggestion to troubleshoot NFS client hang with gluster?
17:34 ramteid joined #gluster
17:34 Peter1 bennyturns: this is how i mount it now glusternfsvipprod006.shopzilla.laxhq:/sas01/OraclusterBackupSata02 on /dwh_data_transfer1 type nfs (rw,nfsvers=3,proto=tcp,addr=10.40.12.80)
17:35 Peter1 and it's hanging :(
17:35 bennyturns Peter1, when you say you re created the FS, did you mkfs.xfs on your bricks or just recreate the volume?
17:36 Peter1 both
17:36 Peter1 mkfs
17:36 bennyturns kk
17:36 bennyturns Peter1, what are you running to create your data?  DD?
17:37 Peter1 perl scripts file open and write
17:38 Peter1 writing small text files
17:38 Peter1 then it hang....:(
17:40 bennyturns Peter1, can you repro the hang outside your scripts with DD in a loop or something?
17:41 Peter1 what kind of dd you runs?
17:42 bennyturns Peter1, something similar to what your scripts are doing?
17:42 Peter1 ok
17:42 bennyturns dd if=/dev/zero of=/gluster-mount/testfile bs=1M count=1
17:42 bennyturns how big are the files?  really small I assume?
17:42 Peter1 yes
17:42 Peter1 like 30k
17:43 chirino joined #gluster
17:43 bennyturns dd if=/dev/zero of=/gluster-mount/testfile bs=30k count=1
17:43 dcope joined #gluster
17:43 anoopcs joined #gluster
17:44 bennyturns if you see a hang strace the PID to see if its really hung or just slow
17:45 Peter1 it runs real fast on the other client
17:46 Peter1 but that particular one hangs and keeps hanging
17:47 bennyturns Peter1, are they both mounting the same node?
17:47 Peter1 yes!
17:48 bennyturns be sure to check gluster v status and make sure the NFS server is online, maybe it crashed or oomed or something?
17:48 Peter1 yes i did check and it's up
17:48 bennyturns also /var/log/glusterfs/nfs.log
17:49 Peter1 let me check the log
17:49 bennyturns anything in the logs?
17:49 bennyturns also /var/log/messages on the client
17:49 Peter1 i have system grepping \sE\s and nothing pops up
17:49 bennyturns just tail it, any warnings or anything?
17:49 Peter1 Jul 25 09:25:55 jobserverdev001 kernel: nfs: server glusternfsvipprod003.shopzilla.laxhq OK
17:49 Peter1 Jul 25 09:25:56 jobserverdev001 last message repeated 14 times
17:49 Peter1 Jul 25 10:01:48 jobserverdev001 kernel: nfs: server glusternfsvipprod003.shopzilla.laxhq not responding, still trying
17:49 Peter1 Jul 25 10:13:53 jobserverdev001 kernel: nfs: server glusternfsvipprod003.shopzilla.laxhq not responding, still trying
17:49 Peter1 Jul 25 10:14:54 jobserverdev001 kernel: nfs: server glusternfsvipprod003.shopzilla.laxhq not responding, still trying
17:49 Peter1 Jul 25 10:16:23 jobserverdev001 last message repeated 2 times
17:49 Peter1 Jul 25 10:24:52 jobserverdev001 kernel: nfs: server glusternfsvipprod003.shopzilla.laxhq OK
17:49 bennyturns I normally grep for M A C E
17:50 bennyturns it cant connect to the server
17:50 bennyturns or it looks to be flapping
17:50 glusterbot bennyturns: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
17:50 JoeJulian I thought I fixed that regex...
17:50 Peter1 lol
17:50 bennyturns :)
17:50 Peter1 ya looks like flapping....
17:51 bennyturns any connectivity issues?  can you ping a bunch of times and see if there is a time out?
17:51 bennyturns what about running iperf between the server and the client to see if there are any problems sending data over the wire
17:51 Peter1 let me do it
17:52 Peter1 how do u use iperf?
17:52 JoeJulian Interesting find today. Regardless of how fast my storage is, writes to a replica 2 volume are capping out at 1337Mbit. I wonder if that's due to context switching and the memory scheduler.
17:53 bennyturns Peter1, its pretty easy, iperf -s for server
17:54 bennyturns and iperf -c <server> for client
17:54 bennyturns start the server and connect in from the client
17:54 Peter1 ok
17:54 Peter1 ---> root@jobserverdev001.shopzilla.laxhq (0.00)# iperf -c glusternfsvipprod003.shopzilla.laxhq
17:54 Peter1 ------------------------------------------------------------
17:54 Peter1 Client connecting to glusternfsvipprod003.shopzilla.laxhq, TCP port 5001
17:54 Peter1 TCP window size: 16.0 KByte (default)
17:54 Peter1 ------------------------------------------------------------
17:54 Peter1 [  3] local 10.40.5.53 port 43026 connected with 10.40.12.75 port 5001
17:54 glusterbot Peter1: -'s karma is now -1
17:54 Peter1 [ ID] Interval       Transfer     Bandwidth
17:54 Peter1 [  3]  0.0-10.0 sec  1.10 GBytes    943 Mbits/sec
17:54 glusterbot Peter1: ----------------------------------------------------------'s karma is now -1
17:54 glusterbot Peter1: ----------------------------------------------------------'s karma is now -2
17:54 Peter1 o man..
17:54 Peter1 sorry
17:55 bennyturns Peter1, are you seeing dropped packets or anything?
17:55 Peter1 http://pastie.org/9420666
17:55 glusterbot Title: #9420666 - Pastie (at pastie.org)
17:56 bala joined #gluster
17:56 Peter1 netstat -i seems clean
17:56 bennyturns Peter1, I have a meeting in 5 I gotta run for a bit
17:57 bennyturns bbiab
17:57 Peter1 ok
17:57 Peter1 thanks!
18:14 Peter1 bennyturns: getting kernel: nfs_statfs: statfs error = 512
18:15 Peter1 and it seems to me that some nfs clients hang against Gluster
18:15 Peter1 wonder if older os need some tunings to prevent hanging?
18:16 JoeJulian Oooh, look at that... it's pretty much what I expected, but a non-journaled filesystem (ext2) writes twice as fast on a qemu image hosted on glusterfs.
18:19 zerick joined #gluster
18:35 nueces joined #gluster
18:48 georgem2 joined #gluster
18:51 georgem2 Is it possible to mount a Gluster volume that sits behind a NAT? My client is at 172.16.31.20 and the Gluster volume is presented by 10.0.5.20 and 10.0.5.22 (replica 2) NAT-ed as 172.16.31.149 and 172.16.31.151; I get "Transport endpoint is not connected" errors
18:52 JoeJulian georgem2: no
18:52 JoeJulian @mount server
18:52 glusterbot JoeJulian: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
18:52 JoeJulian That said, if you understand how it connects I think it might theoretically be possible.
18:53 georgem2 JoeJulian:so the client volume definition contains the real IP of the Gluster nodes?
18:54 JoeJulian Depends on how you defined your volume.
18:54 JoeJulian We always recommend by hostname.
18:54 georgem2 I used the IP but I can delete and start fresh using hostnames, thanks
18:55 JoeJulian Blog about how it turns out, either way, and let us know where you did so, if you don't mind. It would be interesting to see.
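
The hostname approach helps here because a hostname-defined volume lets each client resolve the brick names to whatever addresses it can actually reach; a sketch for the NAT'ed client (hostnames hypothetical, and the brick ports must be reachable through the NAT as well as 24007):

    # /etc/hosts on the client
    172.16.31.149  gluster1
    172.16.31.151  gluster2

    mount -t glusterfs gluster1:/myvol /mnt/gluster
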
18:58 Pupeno joined #gluster
19:02 Pupeno_ joined #gluster
19:04 Pupeno joined #gluster
19:04 dcope joined #gluster
19:05 theron joined #gluster
19:09 theron joined #gluster
19:26 theron_ joined #gluster
19:28 georgem2 JoeJulian:I'll do, but first I have to get it working :)
19:29 georgem2 JoeJulian:thanks for hints
19:29 JoeJulian good luck
19:29 glusterbot New news from newglusterbugs: [Bug 1123475] Cannot retrieve clients from non-participating server <https://bugzilla.redhat.com/show_bug.cgi?id=1123475>
19:30 bala joined #gluster
19:33 JoeJulian semiosis: You back in here yet?
19:34 semiosis i am!
19:34 semiosis o/
19:34 JoeJulian 3.4.5 is out in case you missed the announcement.
19:34 semiosis i did miss that
19:34 semiosis thx
19:35 JoeJulian And I've been nominated to run for chairman of the gluster board...
19:35 JoeJulian Wanna throw your hat in the running?
19:38 * JoeJulian listens to the crickets...
19:41 semiosis pm
19:54 dcope joined #gluster
19:57 B21956 joined #gluster
20:05 julim joined #gluster
20:13 * cicero votes for semiosis, chairman 2014!!!!
20:18 Peter1 bennyturns: seems like i found what hangs the nfs mount
20:18 Peter1 when i do a find . -ls
20:19 nueces joined #gluster
20:20 Peter1 http://pastie.org/9420920
20:20 glusterbot Title: #9420920 - Pastie (at pastie.org)
20:20 Peter1 nfs log shows disconnect....
20:45 Peter1 and this particular export....
20:47 _dist joined #gluster
20:47 glusterbot New news from resolvedglusterbugs: [Bug 765311] new feature request: rdma bonding in glusterfs ib-verbs <https://bugzilla.redhat.com/show_bug.cgi?id=765311>
20:48 Peter1 E [rpcsvc.c:533:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
20:50 chirino joined #gluster
20:58 theron joined #gluster
21:09 Peter1 any gluster client on RHEL 5?
21:17 asku joined #gluster
21:23 theron joined #gluster
21:25 mojibake joined #gluster
21:27 asku joined #gluster
21:28 dcope joined #gluster
21:29 glusterbot New news from newglusterbugs: [Bug 1120815] df reports incorrect space available and used. <https://bugzilla.redhat.com/show_bug.cgi?id=1120815>
21:33 Peter1 bennyturns: in a folder with 10k small files
21:33 Peter1 the process was trying to do an ls against it and it hung over nfs
21:33 bennyturns Peter1, /me looks
21:34 Peter1 but i did try to browse more files over nfs
21:34 bennyturns Peter1, so you have a dir with 10k files and run find . -ls ?
21:34 Peter1 i wonder if any corruption happening on volume
21:34 Peter1 yes
21:34 Peter1 actually just ls -l
21:34 bennyturns and it hangs the nfs mount?
21:34 * bennyturns will try now
21:34 Peter1 THANKS!!!
21:35 bennyturns Peter1, FYI this is what I use to create a bunch of files https://github.com/bengland2/smallfile
21:35 glusterbot Title: bengland2/smallfile · GitHub (at github.com)
21:36 Peter1 ic
21:37 bennyturns creating files now
21:38 Peter1 cool
21:38 Peter1 i m on ubuntu 12.04 xfs 3.5.1
21:38 Peter1 i share nfs export with quota in directories
21:38 Peter1 replica 2 volume
21:41 bennyturns k I am on RHEL, dont have ubuntu handy :(
21:42 bennyturns Peter1, how many bricks?
21:42 Peter1 6
21:42 Peter1 3x2
21:42 bennyturns 3x2?
21:42 bennyturns kk
21:42 Peter1 the problem is this volume has been working for the last 3wks
21:42 Peter1 and started having issue this morning
21:43 Peter1 i wonder if there's a memory leak in gluster that's preventing the volume from working
21:44 bennyturns Peter1, ohh
21:44 bennyturns iirc there was one in quota?
21:44 Peter1 ??
21:44 * bennyturns googles
21:44 Peter1 yes the volume has quota on
21:45 Peter1 i do not think that's anything to do with performance
21:45 Peter1 it's only 22G data
21:45 Peter1 and not much iops
21:46 Peter1 it's functionality: gluster seems to degrade over time while an export is mounted, until you can't get file access anymore
21:46 Peter1 eventually
21:48 bennyturns Peter1, this is the one I am thinking of https://bugzilla.redhat.com/show_bug.cgi?id=1108324
21:48 glusterbot Bug 1108324: is not accessible.
21:49 bennyturns Peter1, lemme see where that is fixed, I doubt its in 3.5.1
21:50 Peter1 hmm i cannot see...
21:52 Peter1 what is the bug about?
21:54 bennyturns Peter1, I am looking for the upstream one, found:
21:54 bennyturns https://bugzilla.redhat.com/show_bug.cgi?id=1110777
21:54 glusterbot Bug 1110777: unspecified, unspecified, ---, vshastry, CLOSED CURRENTRELEASE, glusterfsd OOM - using all memory when quota is enabled
21:54 bennyturns it was fixed in 3.5.1 though :(
21:55 Peter1 maybe not on ubuntu?
21:56 bennyturns Peter1, if something is getting oom killed you should see it in messages
21:56 bennyturns Peter1, grep for OOM on your servers real quick
21:56 Peter1 which log?
21:57 bennyturns or maybe Out of memory:
21:57 bennyturns /var/log/messages
21:57 bennyturns here is what it will look like:
21:57 bennyturns http://gluster.org/pipermail/gluster-users/2014-March/039591.html
21:57 glusterbot Title: [Gluster-users] glusterfs/nfs OOM killed (at gluster.org)
21:58 bennyturns Peter1, is your quota on / ?
21:59 Peter1 yes
21:59 bennyturns k
21:59 Peter1 hmm / of the volume?
22:00 bennyturns Peter1, ya I just ran  gluster v quota testvol limit-usage / 100GB
22:02 Peter1 quota is not on for /
22:03 bennyturns np changing it
22:05 bennyturns Peter1, see any oom messages in var log messages?
22:06 bennyturns Peter1, as far as I can tell this is the fix for the quota memory leak http://review.gluster.org/#/c/8132/2
22:06 glusterbot Title: Gerrit Code Review (at review.gluster.org)
22:06 bennyturns and it was merged into 3.5
22:06 Peter1 u mean i should set quota for / ?
22:07 qdk joined #gluster
22:07 bennyturns Status: MODIFIED → CLOSED
22:07 bennyturns Fixed In Version: glusterfs-3.5.1
22:07 Peter1 hmm
22:07 bennyturns Peter1, no, I am trying to reproduce your issue on my cluster, I was just asking what dir your quota was enabled on so I could set it the same way
22:08 Peter1 icic
22:08 Peter1 thanks!!!
22:08 Peter1 i can mount and ls over gluster client no issue
22:08 Peter1 w/ quota and replica 2
22:08 Peter1 but nfs seems brings so many issue
22:10 _Bryan_ joined #gluster
22:10 bennyturns Peter1, did you grep you messages file on your server nodes for ooms?  I am just curious if there are any
22:10 Peter1 doing it now
22:11 Peter1 no oom message
22:11 Peter1 i did a "grep oom messages"
22:11 mojibake Getting into Gluster so still learning. If I create a replica on two nodes with "gluster volume create gfs-shared replica 2 server1:/export/brick1 server2:/export/brick1" I will have a replicated volume, and if I need to add space i can do "gluster volume add-brick gfs-shared replica 2 server1:/export/brick2 server2:/export/brick2". But how do I go about adding another node or two in the mix if I want more availability and throughput for the clients? and
22:12 Peter1 i did both grep -i oom messages and grep -i "out of memory" messages
22:12 Peter1 nothing returned
22:15 bennyturns Peter1, k prolly not an oom then :(
22:15 Peter1 hmm
22:15 Peter1 wonder if nfs client settings need to change
22:16 Peter1 there were NO errors on the gluster host
22:16 Peter1 but the ls over nfs just hangs clients :(
22:16 bennyturns I am creating the files again but its working for me
22:17 bennyturns I am not on 3.5.1, I owuld need to setup an env for that
22:17 Peter1 i appreciate ur help!
22:17 Peter1 E [rpcsvc.c:533:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
22:17 bennyturns mojibake, to add more nodes you just run gluster peer probe $HOSTNAME
22:18 Peter1 this is one rpc error i got
22:18 bennyturns hrm ,me googles
22:18 Peter1 and i m looking at this
22:18 Peter1 https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/ch21s02.html
22:18 glusterbot Title: 21.2. Troubleshooting File Locks (at access.redhat.com)
22:18 JoeJulian mojibake: gluster volume add-brick gfs-shared server3:/export/brick1 server4:/export/brick1
22:18 JoeJulian @lucky expanding a replicated volume by one server
22:18 glusterbot JoeJulian: http://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
22:20 Peter1 also this
22:20 Peter1 https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/ch09s03s02.html
22:20 glusterbot Title: 9.3.2. Troubleshooting NFS (at access.redhat.com)
22:20 bennyturns Peter1, could it be as simple as noacl?
22:20 bennyturns https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/ch09s03s02.html#idp10738904
22:20 glusterbot Title: 9.3.2. Troubleshooting NFS (at access.redhat.com)
22:21 Peter1 yes i added that too
22:21 Peter1 but alsohang
22:21 Peter1 still hang on ls
22:21 bennyturns you have  -o vers=3,noacl ?
22:22 Peter1 yes
22:22 bennyturns doh
22:22 Peter1 glusternfsvipprod005.shopzilla.laxhq:/sas01/OraclusterBackupSata02 on /gfs/OraclusterBackupSata02 type nfs (rw,vers=3,noacl,addr=10.40.12.79)
22:22 bennyturns backto the drawing board :P
22:23 Peter1 i wish i could be a normal gluster user
22:23 Peter1 how can i trace my ls hang?
22:24 bennyturns strace -p $(pidof ls)
22:24 bennyturns can you pastebin as well?
22:24 Peter1 yes
22:25 Peter1 hmm unfortunately it is working now :(
22:27 Peter1 http://pastie.org/9421139
22:27 glusterbot Title: #9421139 - Pastie (at pastie.org)
22:28 Peter1 this is what it looks like when the command is hanging
22:28 bennyturns Peter1, that is what you see in the strace?
22:28 Peter1 looks like it finds all the files but hangs waiting to get attributes?
22:28 Peter1 yes!!!
22:29 bennyturns hmm
22:30 Peter1 what do u think?
22:30 bennyturns http://gluster.org/community/documentation/index.php/Gluster_3.2:_Troubleshooting_POSIX_ACLs
22:30 glusterbot Title: Gluster 3.2: Troubleshooting POSIX ACLs - GlusterDocumentation (at gluster.org)
22:30 bennyturns Peter1, are you mounting your bricks with noacl?
22:30 Peter1 no
22:31 bennyturns Peter1, maybe try it with -o acl like the doc says?
22:31 Peter1 wait u mean nfs mount yes
22:32 Peter1 o
22:32 bennyturns Peter1, the doc says make sure bricks are mounted with the -o acl option
22:33 Peter1 but i do no see that error
22:33 Peter1 i mean setacl
22:33 Peter1 o wait
22:33 bennyturns Peter1, the client is getting that error in the strace
22:33 Peter1 "system.posix_acl_access", 0x0, 0 ??
22:34 bennyturns Peter1, lgetxattr("publication-130927-1601.csv", "system.posix_acl_access", 0x0, 0) = -1 EOPNOTSUPP (Operation not supported)
22:34 bennyturns Peter1, hrm
22:34 Peter1 hrm ?
22:35 bennyturns hmmm
22:35 Peter1 ok :)
22:35 bennyturns kinda stumped  :P
22:35 Peter1 ya….i have been like that whole week
22:37 bennyturns Peter1, something is going on, my file create isn't complete yet
22:37 Peter1 Oo
22:38 bennyturns load is at 13 on the server
22:38 Peter1 O
22:38 Peter1 that's high
22:39 bennyturns wow something is going on here
22:40 Peter1 ?
22:41 purpleidea bennyturns: ++
22:41 glusterbot purpleidea: bennyturns's karma is now 1
22:41 purpleidea bennyturns: ++
22:41 bennyturns Peter1, my log file is 5095
22:41 glusterbot purpleidea: bennyturns's karma is now 2
22:41 purpleidea bennyturns: ++
22:41 purpleidea bennyturns: ++
22:41 glusterbot purpleidea: bennyturns's karma is now 3
22:41 glusterbot purpleidea: bennyturns's karma is now 4
22:44 bennyturns Peter1, I am seeing the following errors spamming my logs:
22:44 bennyturns http://pastie.org/9421165
22:44 glusterbot Title: #9421165 - Pastie (at pastie.org)
22:45 mojibake JoeJulian: Thank you. Sorry walked away for dinner.
22:45 Peter1 o wow 3.6?!
22:45 Peter1 ya i got these error s too
22:45 Peter1 but it's warnings…?
22:45 Peter1 so i normally ignore it
22:46 theron joined #gluster
22:47 JoeJulian That confuses me because, afaict, 3.5+ doesn't seem to use the marker tree anymore for quota.
22:48 JoeJulian eh, maybe that's a red herring
22:49 bennyturns could be
22:49 bennyturns its filling up my logs though
22:49 bennyturns that needs fixed
22:49 JoeJulian What is /file_srcdir/gqac016.sbu.lab.eng.bos.redhat.com/thrd_00/d_001/d_000/d_000/_00_10000_, an empty file?
22:50 bennyturns no it should be a 64k file
22:50 bennyturns http://pastie.org/9421183
22:50 glusterbot Title: #9421183 - Pastie (at pastie.org)
22:51 bennyturns JoeJulian, that is what I am running, its a perf benchmark for smaller files / different iops
22:51 JoeJulian cool
22:51 bennyturns created 10k per thread, 8 threads
22:51 mojibake bennyturns: Thank you for the reply about adding more nodes. However, looking to add more replicas of the brick across more nodes. Same copy of data across additional nodes, will gluster peer probe $HOSTNAME automatically do that? I thought add-brick or something was needed.
22:52 bennyturns mojibake, what JoeJulian talked about, his blog post
22:53 bennyturns looks like my log quit filling, 24996
22:53 bennyturns lines though :P
22:53 bennyturns ohh the file create just finished
22:54 bennyturns this is terrible though 4.286428 MB/sec
22:57 bennyturns Peter1, my ls hasn't hung yet
22:59 JoeJulian mojibake: In most cases, "you're doing it wrong" if you add more than three replicas. If you want to go from two to three, then probe first, then add-brick $vol replica 3 newserver:/path
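
Spelled out as commands (hostname and paths hypothetical):

    gluster peer probe server3
    gluster volume add-brick gfs-shared replica 3 server3:/export/brick1
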
22:59 Peter1 mine is not hung anymore either
22:59 Peter1 but it was
23:02 bennyturns Peter1, I am gonna run and grab some dinner, I'll biab.  FYI I ran this same smallfile test on a non quota mount the other day and it behaved normal
23:03 bennyturns Peter1, seems like something is getting weird on quota + nfs
23:03 bennyturns [root@gqac016 gluster-mount]# cd /gluster-mount; find . -ls | wc -l
23:03 bennyturns 163294
23:04 Peter1 yes
23:04 Peter1 thanks Benny!
23:04 Peter1 maybe you can turn on quota on ur mount and retry?
23:05 JoeJulian https://plus.google.com/+JoeJulian/posts/4oM1yGG3h88
23:05 bennyturns Peter1, I have quota on
23:06 bennyturns Peter1, gonna run my create test with it disabled and check throughput when I get back
23:06 bennyturns then I'll turn it back on and compare
23:06 Peter1 thx
23:06 bennyturns Peter1, NP!  we are gonna get you up and running good :)
23:08 Peter1 i wanna cry :)
23:08 Peter1 in a thankful way
23:21 ccha3 joined #gluster
23:22 lyang01 joined #gluster
23:22 JustinCl1ft joined #gluster
23:23 hybrid5121 joined #gluster
23:23 theYAKman joined #gluster
23:24 lava_ joined #gluster
23:24 gts joined #gluster
23:24 twx_ joined #gluster
23:25 msvbhat_ joined #gluster
23:25 oxidane_ joined #gluster
23:25 tty00_ joined #gluster
23:25 bgupta joined #gluster
23:25 xavih_ joined #gluster
23:28 ThatGraemeGuy_ joined #gluster
23:29 firemanxbr joined #gluster
23:30 _NiC joined #gluster
23:30 harish_ joined #gluster
23:31 tty00 joined #gluster
23:31 mdavidson joined #gluster
23:31 [o__o] joined #gluster
23:31 XpineX joined #gluster
23:32 dcope joined #gluster
23:36 Peter1 bennyturns: can i really mount xfs in ubuntu with acl option???
23:36 Peter1 i got error on mounting
23:39 Peter1 since i m using xfs, seems like acl already enabled out of the box
23:42 bennyturns Peter1, it should be enabled afaik, but if it is you should be able to mount -o acl
23:43 Peter1 afaik?
23:43 bennyturns Peter1, http://pastie.org/9421408
23:43 glusterbot Title: #9421408 - Pastie (at pastie.org)
23:43 bennyturns afaik = as far as I know
23:44 Peter1 oic
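
(On XFS, POSIX ACL support is a kernel build option that is normally on, which is why an explicit acl mount flag can be rejected; ext3/ext4 are the filesystems that need -o acl. A quick way to check is simply to exercise an ACL on the brick; paths hypothetical:)

    touch /export/brick1/acltest
    setfacl -m u:nobody:r /export/brick1/acltest && getfacl /export/brick1/acltest
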
23:44 Peter1 i was trying to put it in fstab but no luck
23:46 bennyturns Peter1, can you try remount like I did?
23:46 Peter1 still getting the same strace after remount
23:47 Peter1 just did
23:47 bennyturns kk
23:47 Peter1 http://pastie.org/9421415
23:47 glusterbot Title: #9421415 - Pastie (at pastie.org)
23:48 bennyturns Peter1, here is what I get http://pastie.org/9421416
23:48 glusterbot Title: #9421416 - Pastie (at pastie.org)
23:49 bennyturns its gotta be something with the back end FS
23:49 bennyturns see how mine is successful?
23:49 Peter1 mine got through too but takes very very long in the dir with 10k files
23:50 bennyturns Peter1, so you have selinux enabled on the  server?
23:50 Peter1 no
23:50 Peter1 i just figure something....
23:50 Peter1 i think
23:51 Peter1 with nfs mount only do vers=3
23:51 Peter1 and mount with acl
23:51 Peter1 ls is super fast for 10k files
23:51 Peter1 strace without those EOPNOTSUPP
23:51 bennyturns so it works normally?
23:52 Peter1 hmmm hang now
23:52 Peter1 the 1st couple times of ls super fast
23:53 Peter1 let me try the other way, with no noacl on the mount and no acl on the brick
23:55 bennyturns k
23:58 Peter1 do u think i should restart volume whenever i change brick mount options?
23:59 bennyturns Peter1, prolly
23:59 bennyturns be sure to bounce the brick processes as well
