
IRC log for #gluster, 2014-10-30


All times shown according to UTC.

Time Nick Message
00:17 DougBishop joined #gluster
00:29 firemanxbr joined #gluster
00:30 coredump joined #gluster
00:33 Pupeno joined #gluster
00:39 Pupeno joined #gluster
00:42 firemanxbr joined #gluster
00:44 firemanxbr joined #gluster
01:08 Andreas-IPO joined #gluster
01:09 meghanam_ joined #gluster
01:09 meghanam joined #gluster
01:10 VeggieMeat joined #gluster
01:18 glusterbot New news from newglusterbugs: [Bug 1058300] VMs do not resume after paused state and storage connection to a gluster domain (they will also fail to be manually resumed) <https://bugzilla.redhat.com/show_bug.cgi?id=1058300>
01:18 verdurin joined #gluster
01:22 rotbeard joined #gluster
01:25 uebera|| joined #gluster
01:27 capri joined #gluster
01:40 firemanxbr joined #gluster
02:08 gildub joined #gluster
02:09 overclk joined #gluster
02:09 MugginsM joined #gluster
02:09 MugginsM hi all
02:10 MugginsM so I'm running a 2-server replica on 3.4.5, and just did an add-brick and rebalance. Then all hell broke loose. Clients are getting lots of file permission errors, and brick logs are full of:
02:10 MugginsM I [server-rpc-fops.c:575:server_mknod_cbk] 0-storage-server: 49291: MKNOD /media/settings/branding/favicon-lds.ico (1776f2b8-8857-4801-8c58-266eafcd7a87/favicon-lds.ico) ==> (Permission denied)
02:11 MugginsM I stopped the rebalance and am desperately trying to fix these :-/
02:11 MugginsM any thoughts? I'd swear it's https://bugzilla.redhat.com/show_bug.cgi?id=884597 except that's supposed to be fixed before 3.4.5
02:11 glusterbot Bug 884597: medium, medium, 3.4.0, nsathyan, CLOSED CURRENTRELEASE, dht linkfile are created with different owner:group than that source(data) file in few cases
02:12 harish joined #gluster
02:13 MugginsM it fixes it (temporarily?) if I go into the broken folder on the client and do an ls -R
02:13 MugginsM but we have quite a few clients and a full ls -R of the whole system takes hours
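The ls -R workaround MugginsM describes presumably works because each named lookup from a client gives the DHT translator a chance to repair the linkfile for that path. A minimal sketch of scripting the same walk from a single client, assuming a hypothetical mount point of /mnt/storage:

    # Force a lookup of every file on the volume from one client; the mount
    # point is an assumption, and stat is only there to trigger the lookup.
    find /mnt/storage -exec stat --format='%n %U:%G' {} + > /dev/null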
02:45 overclk joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/ | GlusterFS 3.6 test week - https://public.pad.fsfe.org/p/GlusterFS-3.6-test-doc
02:52 overclk joined #gluster
02:57 Pupeno joined #gluster
03:28 hagarth joined #gluster
03:31 meghanam joined #gluster
03:32 meghanam_ joined #gluster
03:33 kanagaraj joined #gluster
03:36 hchiramm_ joined #gluster
03:40 rejy joined #gluster
03:40 bala joined #gluster
03:42 RameshN joined #gluster
03:42 shubhendu joined #gluster
03:48 kshlm joined #gluster
03:51 ira joined #gluster
03:55 overclk joined #gluster
03:57 ppai joined #gluster
03:57 gildub joined #gluster
04:01 n-st joined #gluster
04:02 stickyboy joined #gluster
04:04 kshlm joined #gluster
04:07 kshlm joined #gluster
04:09 nbalachandran joined #gluster
04:14 MugginsM hmm, possibly related, we have files on the bricks and a few of their ----------T equivalents on other bricks with different owners
04:14 glusterbot MugginsM: --------'s karma is now -1
04:14 MugginsM oh pfft
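If the stray linkfiles are the cause, they can be inspected directly on a brick: the mode ---------T files carry a trusted.glusterfs.dht.linkto xattr naming the subvolume that holds the real data. A sketch, with a hypothetical brick path prepended to the file from the earlier log line:

    # Look at a suspected linkfile on the brick itself (brick path assumed).
    ls -l /data/brick1/media/settings/branding/favicon-lds.ico
    # Show which DHT subvolume the linkfile points at.
    getfattr -n trusted.glusterfs.dht.linkto -e text \
        /data/brick1/media/settings/branding/favicon-lds.ico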
04:20 Rafi_kc joined #gluster
04:21 rafi1 joined #gluster
04:23 jiffin joined #gluster
04:25 anoopcs joined #gluster
04:26 overclk joined #gluster
04:28 sahina joined #gluster
04:33 atinmu joined #gluster
04:36 Guest5348 joined #gluster
04:37 nishanth joined #gluster
04:43 XpineX joined #gluster
04:52 anoopcs joined #gluster
04:59 ndarshan joined #gluster
05:06 meghanam joined #gluster
05:06 meghanam_ joined #gluster
05:07 shubhendu joined #gluster
05:08 karnan joined #gluster
05:13 glusterbot New news from resolvedglusterbugs: [Bug 842206] glusterfsd: page allocation failure <https://bugzilla.redhat.com/show_bug.cgi?id=842206>
05:16 bala joined #gluster
05:18 ira joined #gluster
05:18 Guest5348 joined #gluster
05:30 saurabh joined #gluster
05:45 hagarth joined #gluster
05:49 glusterbot New news from newglusterbugs: [Bug 1158746] client process will hang if server is started to send the request before completing connection establishment. <https://bugzilla.redhat.com/show_bug.cgi?id=1158746>
05:52 atalur joined #gluster
05:54 kdhananjay joined #gluster
06:11 soumya joined #gluster
06:13 overclk joined #gluster
06:14 anoopcs1 joined #gluster
06:16 anoopcs1 joined #gluster
06:16 eightyeight joined #gluster
06:19 anoopcs joined #gluster
06:19 kiran_ joined #gluster
06:21 anoopcs joined #gluster
06:22 anoopcs joined #gluster
06:24 haomaiwa_ joined #gluster
06:24 Humble kiran++
06:24 glusterbot Humble: kiran's karma is now 1
06:29 Guest5348 joined #gluster
06:30 JonathanD joined #gluster
06:31 ira joined #gluster
06:48 ricky-ticky joined #gluster
06:51 kiran_ Humble++
06:51 glusterbot kiran_: Humble's karma is now 3
06:53 kumar joined #gluster
06:57 nshaikh joined #gluster
07:08 stickyboy joined #gluster
07:13 rgustafs joined #gluster
07:18 atinmu joined #gluster
07:20 Guest5348 joined #gluster
07:22 raghu joined #gluster
07:22 hollaus joined #gluster
07:27 stickyboy joined #gluster
07:32 Fen2 joined #gluster
07:35 rcaskey joined #gluster
07:42 atinmu joined #gluster
07:49 Guest5348 joined #gluster
07:49 R0ok_ joined #gluster
07:54 hybrid512 joined #gluster
08:00 smallbig joined #gluster
08:11 nshaikh joined #gluster
08:12 ricky-ticky1 joined #gluster
08:26 lalatenduM joined #gluster
08:34 hollaus joined #gluster
08:41 vikumar joined #gluster
08:44 glusterbot New news from resolvedglusterbugs: [Bug 764655] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=764655>
08:56 atinmu joined #gluster
09:04 Pupeno joined #gluster
09:11 deniszh joined #gluster
09:22 Philambdo joined #gluster
09:25 atinmu joined #gluster
09:26 Fen2 Hi all! Why is RAID 6 recommended? Is RAID 10 not better?
09:33 kshlm joined #gluster
09:34 kshlm joined #gluster
09:35 dogmatic69 Fen2:  where do you see it recommended?
09:35 Fen2 dogmatic69: here : https://access.redhat.com/articles/66206
09:35 glusterbot Title: Red Hat Storage Server 3.0 Compatible Physical, Virtual Server and Client OS Platforms - Red Hat Customer Portal (at access.redhat.com)
09:36 Fen2 What is the common use ?
09:37 dogmatic69 Fen2:  maybe because raid 6 gives more space, but that is not a gluster recommendation so who knows.
09:38 Fen2 Yeah maybe but RAID 10 is faster than RAID 6 so it's weird
09:39 dogmatic69 well redhat is weird :P
09:40 lalatenduM joined #gluster
09:47 afics joined #gluster
09:48 Guest5348 joined #gluster
09:48 hagarth joined #gluster
09:49 atinmu joined #gluster
09:50 aravindavk joined #gluster
09:51 shubhendu joined #gluster
09:54 haomai___ joined #gluster
09:55 meghanam joined #gluster
09:55 meghanam_ joined #gluster
09:56 ppai joined #gluster
09:57 dusmant joined #gluster
10:01 Debolaz Fen2: If you can afford it, RAID 10 is better than RAID 6. RAID 6 is a poor man's RAID 10. :)
10:15 rgustafs joined #gluster
10:22 ira joined #gluster
10:43 kshlm joined #gluster
10:44 kkeithley1 joined #gluster
10:50 glusterbot New news from newglusterbugs: [Bug 1158831] gnfs : nfs mount fails if the connection between nfs server and bricks is not established <https://bugzilla.redhat.com/show_bug.cgi?id=1158831>
10:51 atinmu joined #gluster
10:53 ppai joined #gluster
10:54 bene2 joined #gluster
10:57 LebedevRI joined #gluster
11:00 mojibake joined #gluster
11:01 atrius joined #gluster
11:02 haomaiwa_ joined #gluster
11:13 lpabon joined #gluster
11:14 jvandewege joined #gluster
11:19 shubhendu joined #gluster
11:20 jvandewege_ joined #gluster
11:30 karnan joined #gluster
11:32 SOLDIERz joined #gluster
11:35 an joined #gluster
11:37 abyss_ joined #gluster
11:44 lalatenduM joined #gluster
11:47 meghanam_ joined #gluster
11:47 meghanam joined #gluster
11:49 Guest5348 joined #gluster
11:55 SOLDIERz joined #gluster
11:58 soumya joined #gluster
11:59 magamo Just would like to say thank you for the help last evening.  Looks like my volume heal happened overnight, and all looks good and right in the world.
12:05 virusuy joined #gluster
12:08 edward1 joined #gluster
12:16 meghanam joined #gluster
12:18 meghanam_ joined #gluster
12:21 Debolaz magamo: I didn't help any, but I'd be more than happy to take credit. Thank you. ;)
12:21 RameshN joined #gluster
12:30 deniszh1 joined #gluster
12:39 atalur joined #gluster
12:39 haomaiwa_ joined #gluster
12:45 RameshN joined #gluster
12:49 hagarth joined #gluster
12:49 meghanam joined #gluster
12:49 meghanam_ joined #gluster
12:55 Bardack joined #gluster
12:56 theron joined #gluster
12:56 morse joined #gluster
12:57 delhage joined #gluster
12:57 spoofedpacket joined #gluster
12:58 bene2 joined #gluster
12:59 lalatenduM joined #gluster
13:01 Fen1 joined #gluster
13:03 an joined #gluster
13:06 coredump joined #gluster
13:08 meghanam_ joined #gluster
13:17 chirino joined #gluster
13:20 an joined #gluster
13:23 failshell joined #gluster
13:28 kshlm joined #gluster
13:33 plarsen joined #gluster
13:42 ron-slc joined #gluster
13:47 _dist joined #gluster
13:53 calisto joined #gluster
13:54 harish_ joined #gluster
13:55 bennyturns joined #gluster
13:56 julim joined #gluster
13:58 theron joined #gluster
14:04 _dist joined #gluster
14:08 raghu_ joined #gluster
14:12 mbukatov joined #gluster
14:13 xleo joined #gluster
14:14 jobewan joined #gluster
14:16 bazzles joined #gluster
14:19 sazze joined #gluster
14:20 joel__ joined #gluster
14:21 sazze Hi all, I have a fairly odd issue with a gluster brick -- the replica 2 volume is 576GB full of about 1TB; this brick's folder du -h | tail -n 1 shows 576GB, but the system's df -h shows 533GB on that dedicated mount. BAFFLED. A little help pls
14:22 wushudoin joined #gluster
14:26 theron_ joined #gluster
14:28 _Bryan_ joined #gluster
14:30 sazze hi, a recently healed replica 2 brick is showing different du -h vs df -h for the same data -- any ideas why? don't wanna lose data in this upgrade...
14:31 lalatenduM joined #gluster
14:33 DougBishop joined #gluster
14:35 haomaiwa_ joined #gluster
14:37 jiffe is there a way to administratively take a brick offline?
14:38 jiffe one of our machines is in a quasi-dead state so I can't get to it and the gluster volumes on it are down
14:43 haomai___ joined #gluster
14:44 bala joined #gluster
14:45 jiffe that machine is fully dead now so all is well[ish]
14:45 B21956 joined #gluster
14:53 sazze hi jiffe
14:53 xleo left #gluster
14:54 sazze I'm here to get some advice myself, but if your machine died, glusterd stopped, which allows you certain types of service access
14:55 sazze on your good server, run command "gluster volume info" and "gluster volume status"
14:55 sazze share that...
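For jiffe's original question, one common way to take a single brick offline administratively is to stop just that brick's glusterfsd process so clients fail over to the replica. A sketch with a hypothetical volume name; the PID is a placeholder to be read from the status output:

    # Find the PID of the brick's glusterfsd in the status output, then stop it.
    gluster volume status myvol
    kill -TERM <pid-of-the-brick-process>   # placeholder; take it from the output above
    # Bring the brick back later by force-restarting the volume's processes.
    gluster volume start myvol force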
14:55 sazze hi gluster support, my disk usage on a dedicated brick mount doesn't match between du -h and df -h -- any ideas???
15:07 deepakcs joined #gluster
15:10 hagarth joined #gluster
15:13 jbrooks_ joined #gluster
15:15 lmickh joined #gluster
15:15 bazzles joined #gluster
15:18 msmith_ joined #gluster
15:21 sazze joined #gluster
15:35 soumya joined #gluster
15:36 an joined #gluster
15:41 bennyturns joined #gluster
15:43 kumar joined #gluster
15:44 Norky joined #gluster
15:49 sazze hagarth, does gluster somehow set a size attr that might trick df into showing less than du?
15:59 rwheeler joined #gluster
16:09 soumya joined #gluster
16:11 an joined #gluster
16:16 bennyturns joined #gluster
16:31 deniszh joined #gluster
16:31 ttk iirc, du reflects sparse files' apparent sizes, and df reflects space actually used in a filesystem.  Does gluster use sparse files?
16:32 * ttk was wrong; du requires the --apparent-size flag to get that behavior
16:32 ttk n/m
16:33 ttk oh .. but the -b flag implicitly sets --apparent-size
16:33 ttk but -h does not
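sazze's du/df gap can usually be pinned down by comparing apparent size with allocated blocks on the brick, since df only ever counts allocated blocks. A sketch with a hypothetical brick path:

    du -sh /data/brick1                    # allocated blocks (the figure df agrees with)
    du -sh --apparent-size /data/brick1    # file lengths, counting holes in sparse files
    df -h /data/brick1                     # filesystem-level usage for the brick's mount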
16:39 zerick joined #gluster
16:40 sputnik13 joined #gluster
16:44 hagarth joined #gluster
16:45 calisto joined #gluster
16:50 haomaiwa_ joined #gluster
16:57 _dist joined #gluster
17:02 lalatenduM joined #gluster
17:03 theron joined #gluster
17:05 JoeJulian ttk: Yes, gluster will create sparse files in the same way as a local filesystem will. Further, the self-heal process may create sparse files, or sparseness within a file, as part of the diff-healing.
17:07 semiosis ttk: JoeJulian: sazze isn't here anymore
17:08 haomai___ joined #gluster
17:08 diegows joined #gluster
17:11 JoeJulian I didn't scroll back that far... ;)
17:21 justinmburrous joined #gluster
17:26 jonb joined #gluster
17:28 hgarrow joined #gluster
17:34 hgarrow left #gluster
17:44 chirino joined #gluster
17:50 michaellotz joined #gluster
17:50 sputnik1_ joined #gluster
17:51 michaellotz hello, I have a big problem. Brick Status: Transport endpoint is not connected. How can I fix this problem?
17:51 michaellotz the brick is on the node itself
17:52 michaellotz gluster volume info
17:52 michaellotz
17:52 michaellotz Volume Name: gv0
17:52 michaellotz Type: Distributed-Replicate
17:52 michaellotz Volume ID: 69c48d52-560b-4e70-8c06-3f4d2ea82d48
17:52 michaellotz Status: Started
17:52 michaellotz Number of Bricks: 2 x 2 = 4
17:52 michaellotz Transport-type: tcp
17:54 tomased joined #gluster
17:54 JoeJulian michaellotz: unmount and mount again.
17:54 JoeJulian Sounds like the client crashed.
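A sketch of the remount being suggested, with a hypothetical server name, volume name, and mount point:

    umount -l /mnt/gv0                        # lazy unmount, in case the dead mount hangs
    mount -t glusterfs server1:/gv0 /mnt/gv0  # reconnect the FUSE client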
17:55 sputnik13 joined #gluster
18:01 DougBishop joined #gluster
18:02 rshott joined #gluster
18:02 michaellotz JoeJulian: thx
18:02 michaellotz JoeJulian: i love you ;)
18:03 hollaus joined #gluster
18:03 _dist joined #gluster
18:04 michaellotz JoeJulian: now I have: 0-glusterfs: transport.address-family not specified.
18:05 michaellotz I started gluster volume heal gv0
18:05 sputnik1_ joined #gluster
18:08 JoeJulian The "not specified" is normal. It just means it's defaulting to tcp.
18:08 hgarrow joined #gluster
18:09 michaellotz the healing takes a long time, is this normal?
18:10 JoeJulian depends on use case
18:11 JoeJulian I'm not sure why it's healing since you didn't take any servers down (did you?).
18:11 michaellotz another thing in the log: Unable to get lock for uuid: 60a186aa-f714-44af-b1fe-3422702e7ced
18:11 _dist michaellotz: our file volume heals in minutes, our VM volume heals in 18-24 hours regardless of how long the brick is down
18:11 michaellotz the file isn't present on any brick
18:12 JoeJulian well that would make it hard to lock then, wouldn't it....
18:12 JoeJulian :D
18:15 michaellotz ok, after the heal operation, we get no confirmation of success. is this a failure?
18:15 JoeJulian It is not.
18:22 hollaus joined #gluster
18:25 michaellotz gluster volume status gives no answer. have you any idea?
18:27 magamo michaellotz: Have you tried any of the 'gluster volume heal info' options?
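For reference, the heal-info variants magamo refers to, using the gv0 volume name from the log; the exact subcommands and output vary by release, so treat this as a sketch for 3.4/3.5-era gluster:

    gluster volume heal gv0 info              # entries still needing heal
    gluster volume heal gv0 info healed       # entries healed recently
    gluster volume heal gv0 info heal-failed  # entries the self-heal daemon gave up on
    gluster volume heal gv0 info split-brain  # entries in split-brain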
18:29 michaellotz gluster volume heal gv0 info healed is currently running
18:32 michaellotz gluster volume heal gv0 info healed
18:32 michaellotz Another transaction could be in progress. Please try again after sometime.
18:32 michaellotz narv
18:39 ekuric joined #gluster
18:40 virusuy joined #gluster
18:40 virusuy joined #gluster
18:42 coredump joined #gluster
19:02 firemanxbr joined #gluster
19:03 MrAbaddon joined #gluster
19:07 wushudoin joined #gluster
19:13 chirino joined #gluster
19:23 hollaus joined #gluster
19:24 n-st joined #gluster
19:26 an joined #gluster
19:32 julim joined #gluster
19:33 firemanxbr left #gluster
19:34 jonb Hello, my name is Jon. It's my first time here in IRC but I have been using Gluster for almost a year now and am a big fan. However I am currently having a bit of trouble with Gluster, Samba, and the VFS plug-in between them and am not 100% sure where to go from here. Since I found the samba/Gluster project on Gluster Forge I figured I'd come here for advice.
19:35 ndevos welcome jonb!
19:35 jonb Thanks
19:35 jonb The behavior I am seeing is that by using the samba-vfs-gluster plug-in the case sensitivity setting for Samba breaks in a subtle but noticeable way. Windows clients are unable to do a "does file exist" check when one of the directory names in the path is not the correct case (/RELEASE_1234/ vs /release_1234/). An old Delphi client program first exposed this bug but I've been able to reproduce it in Java. The confusing part to me is Windows explorer is
19:36 jonb able to navigate straight to the file even if the incorrect case is used and once Windows has opened a connection to that directory subsequent file look-ups succeed even when using the incorrect case. I suspect there is Windows or Samba caching magic happening there, but I'm no samba expert. Mounting the Gluster volume to one of the nodes locally then sharing that through Samba fixes the problem but I've found performance to be terrible.
19:38 ndevos urgh, definitely sounds like a samba config option of some kind... but thats not something I'm very familiar with
19:39 ndevos I'm not sure how many samba users there are in this channel, in case you do not have an answer soon, send an email to gluster-users@gluster.org about this
19:39 semiosis Name Mangling and Case, the Samba Book - http://www.oreilly.com/openbook/samba/book/ch05_04.html
19:39 glusterbot Title: [Chapter 5] 5.4 Name Mangling and Case (at www.oreilly.com)
19:40 semiosis see also man smb.conf
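The relevant knobs live in smb.conf. A minimal sketch of a vfs_glusterfs share with the case-handling options spelled out; the share name, volume, and server are assumptions, and whether these particular settings cure jonb's lookup problem is untested:

    [projects]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = gv0
        glusterfs:volfile_server = localhost
        read only = no
        # case handling, per smb.conf(5) and the chapter linked above
        case sensitive = no
        preserve case = yes
        short preserve case = yes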
19:42 jonb I will have to read up on the name mangling; I was aware of it but didn't pursue it too far, thinking (or at least hoping) that it was confined to writes.
19:42 jonb Thank you ndevos, I will email to that unless something pops up around here before then.
19:43 nueces joined #gluster
19:43 jonb Does anyone know of an IRC or forum for Samba developers? I only ran into BugZilla during my searches.
19:45 ndevos jonb: you can join #samba for that :)
19:46 jonb So obvious I never would have thought of that.
19:53 B21956 joined #gluster
19:56 hgarrow left #gluster
20:05 hollaus joined #gluster
20:06 chirino joined #gluster
20:12 justinmburrous joined #gluster
20:13 rshott joined #gluster
20:20 justinmburrous joined #gluster
20:32 PeterA joined #gluster
20:35 hollaus joined #gluster
20:36 bene2 joined #gluster
20:39 LebedevRI joined #gluster
20:57 Pupeno_ joined #gluster
21:01 MugginsM joined #gluster
21:03 hollaus joined #gluster
21:14 firemanxbr joined #gluster
21:38 msmith_ joined #gluster
21:39 msmith_ joined #gluster
21:45 badone joined #gluster
21:46 hollaus joined #gluster
21:47 rotbeard joined #gluster
21:50 badone_ joined #gluster
22:17 hollaus joined #gluster
22:34 calisto joined #gluster
22:52 hollaus joined #gluster
23:09 gildub joined #gluster
23:17 bennyturns joined #gluster
23:39 msmith_ joined #gluster
23:41 bennyturns joined #gluster
23:46 Pupeno joined #gluster
23:46 rotbeard joined #gluster
