
IRC log for #gluster, 2013-11-14


All times shown according to UTC.

Time Nick Message
00:04 sprachgenerator ok - that was it - I thought under fedora there were two, one for fs and another for d
00:07 khushildep joined #gluster
00:16 zaitcev joined #gluster
00:21 uebera|| joined #gluster
00:22 tyl0r joined #gluster
00:50 diegows_ joined #gluster
01:07 chirino joined #gluster
01:19 uebera|| joined #gluster
01:19 satheesh1 joined #gluster
01:29 andreask joined #gluster
01:29 Rio_S2 joined #gluster
01:54 mjrosenb [2013-11-13 17:33:49.892755] W [fuse-bridge.c:713:fuse_fd_cbk] 0-glusterfs-fuse: 897459: OPEN() /incoming/sorted/tv/TAS/Season 2/06 - The Counter-Clock Incident.avi => -1 (No such file or directory)
01:54 mjrosenb :-(
01:54 JoeJulian Pfft... you didn't want the xvid anyway...
01:58 JoeJulian Heh, I didn't even know that existed. :/
01:58 mjrosenb also in there:
01:58 mjrosenb [2013-11-13 17:33:49.871440] W [client3_1-fops.c:1059:client3_1_getxattr_cbk] 0-magluster-client-1: remote operation failed: No such file or directory. Path: /incoming/sorted/tv/TAS/Season 2/04 - Albatross.avi (0664b56b-dd94-4652-b679-6fc5265eb9bf). Key: trusted.glusterfs.dht.linkto
01:59 davidbie_ joined #gluster
02:01 chirino joined #gluster
02:02 harish_ joined #gluster
02:18 mjrosenb this is strange. the file only exists on one brick, it isn't a link or anything.  it is the full file!
02:18 mjrosenb also, I can access it on another client
02:18 asias joined #gluster
02:19 JoeJulian Sounds like your client isn't connected to one of your bricks.
02:26 mjrosenb how can I ask it what bricks it is on speaking terms with?
02:30 xymox joined #gluster
02:30 _Bryan__ joined #gluster
02:45 REdOG joined #gluster
02:50 hagarth joined #gluster
03:02 rjoseph joined #gluster
03:03 kshlm joined #gluster
03:06 sgowda joined #gluster
03:08 JoeJulian mjrosenb: netstat, gluster volume status clients
03:21 nueces joined #gluster
03:23 mjrosenb JoeJulian: do that on the bricks?
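
A minimal sketch of what JoeJulian is suggesting, runnable on either side; the volume name "magluster" is taken from the client log line above, the rest is generic:

    # on the client: confirm the FUSE process has a TCP connection to every brick
    netstat -tnp | grep gluster
    # on any server in the pool: list the clients each brick currently sees
    gluster volume status magluster clients
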
03:24 shubhendu joined #gluster
03:44 bharata-rao joined #gluster
03:45 shylesh joined #gluster
03:48 itisravi joined #gluster
03:52 djgiggle joined #gluster
03:53 djgiggle Hi, I know that for the server, I have to allocate space for the bricks. How about for the clients? Do I need to allocate space as well?
03:58 itisravi djgiggle:You don't need to. Clients just need to mount the volume to access it.
03:59 djgiggle itisravi: thank yo
03:59 djgiggle you
03:59 itisravi djgiggle: welcome:)
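
For reference, the client side really is just a mount; a sketch with hypothetical server and volume names:

    # native FUSE client mount; server1 is any peer in the pool, myvol is the volume
    mount -t glusterfs server1:/myvol /mnt/myvol
    # fstab equivalent:
    # server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0
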
04:04 RameshN joined #gluster
04:05 dusmant joined #gluster
04:06 bulde joined #gluster
04:25 shruti joined #gluster
04:31 vpshastry joined #gluster
04:35 hagarth joined #gluster
04:42 raghu joined #gluster
04:42 davidbierce joined #gluster
04:47 Glusted joined #gluster
04:47 Glusted Hello, I have questions about the following command: gluster volume heal myvolume info heal-failed
04:47 Glusted Anyone knows what it is supposed to return?
04:50 Glusted pS: I have glusterfs 3.4.0
04:50 kanagaraj joined #gluster
04:53 aravindavk joined #gluster
04:54 satheesh1 joined #gluster
04:54 meghanam joined #gluster
04:54 meghanam_ joined #gluster
04:59 satheesh joined #gluster
05:13 ppai joined #gluster
05:14 ababu joined #gluster
05:18 Glusted #gluster-dev
05:19 gunthaa__ joined #gluster
05:24 bigclouds joined #gluster
05:24 bigclouds gluster peer status
05:24 bigclouds peer status: failed
05:25 bigclouds why? the cluster works now
05:26 bigclouds also  'gluster volume info all'  get nothing
05:28 bala joined #gluster
05:30 rcoup joined #gluster
05:31 mohankumar__ joined #gluster
05:31 * REdOG spent all day reading up on gluster
05:36 nshaikh joined #gluster
05:41 bigclouds gluster peer probe host, return 1.
05:41 bigclouds nothing more to read
05:49 Shri joined #gluster
05:49 ndarshan joined #gluster
05:52 kevein joined #gluster
05:55 lalatenduM joined #gluster
06:11 saurabh joined #gluster
06:12 CheRi joined #gluster
06:13 nueces left #gluster
06:22 gunthaa__ joined #gluster
06:34 bulde joined #gluster
06:44 rastar joined #gluster
06:46 satheesh1 joined #gluster
06:50 raghu joined #gluster
07:00 hagarth bigclouds: glusterd's log file is a good place to look up why it failed
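
A sketch of that, assuming the default log location used by packages of this era:

    # glusterd's log; the file name is derived from /etc/glusterfs/glusterd.vol
    tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    # or watch it while re-running the probe
    tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log &
    gluster peer probe host
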
07:12 ngoswami joined #gluster
07:13 ababu joined #gluster
07:17 psharma joined #gluster
07:20 ThatGraemeGuy joined #gluster
07:25 vshankar joined #gluster
07:30 klaxa|work joined #gluster
07:32 jtux joined #gluster
07:33 klaxa|work where did i change the debug level of the self-heal daemon in 3.3.2 again? /var/lib/glusterd/glustershd/glustershd-server.vol ?
07:34 mbukatov joined #gluster
08:05 Glusted joined #gluster
08:05 Glusted #gluster-dev
08:07 eseyman joined #gluster
08:09 ricky-ticky joined #gluster
08:10 franc joined #gluster
08:10 franc joined #gluster
08:11 Glusted h
08:13 ctria joined #gluster
08:19 vpshastry joined #gluster
08:22 mgebbe joined #gluster
08:26 vshankar joined #gluster
08:41 eryc_ joined #gluster
08:41 vpagan_ joined #gluster
08:44 sweeper joined #gluster
08:44 jbrooks joined #gluster
08:44 Technicool joined #gluster
08:44 jag3773 joined #gluster
08:44 [o__o] joined #gluster
08:44 klaxa joined #gluster
08:44 nonsenso_ joined #gluster
08:44 RobertLaptop joined #gluster
08:44 edong23 joined #gluster
08:44 bdperkin joined #gluster
08:44 tru_tru joined #gluster
08:45 kshlm joined #gluster
08:45 bulde joined #gluster
08:45 rastar joined #gluster
08:45 raghu joined #gluster
08:45 ngoswami joined #gluster
08:45 psharma joined #gluster
08:45 mbukatov joined #gluster
08:45 Glusted joined #gluster
08:45 ctria joined #gluster
08:45 glusterbot New news from newglusterbugs: [Bug 1030228] numerous fd's opened on samba-gluster log file by smbd process <http://goo.gl/ptx0yu>
08:45 vshankar joined #gluster
08:46 hybrid5121 joined #gluster
08:46 Skaag joined #gluster
08:46 T0aD joined #gluster
08:46 kkeithley joined #gluster
08:46 ofu_ joined #gluster
08:49 zwu joined #gluster
08:56 satheesh1 joined #gluster
09:07 bharata_ joined #gluster
09:07 Glusted what is glusterbot?
09:07 calum_ joined #gluster
09:11 jtux joined #gluster
09:15 andreask joined #gluster
09:21 hagarth joined #gluster
09:23 vpshastry1 joined #gluster
09:23 Glusted Anyone knows about "heal-failed" status?
09:24 ekuric joined #gluster
09:40 ndarshan joined #gluster
09:41 andreask joined #gluster
09:42 lanning joined #gluster
09:48 geewiz joined #gluster
10:00 haritsu joined #gluster
10:00 bgpepi joined #gluster
10:04 ngoswami joined #gluster
10:16 satheesh1 joined #gluster
10:18 bluedev joined #gluster
10:19 bluedev I just looked through some of the gluster documentation and it looks really nice
10:19 bluedev is there a doc about geographical replication?
10:21 Nev joined #gluster
10:22 Nev hm, about volume info status
10:22 Nev the nfs part, in a replicate setup, should the NFS Server on localhost, be online on 2 nodes or only on 1 ?!
10:23 Nev__ joined #gluster
10:23 Nev__ baout volume info command, the nfs part, in a replicate setup, should the NFS Server on localhost, be online on 2 nodes or only on 1 ?!
10:24 Nev__ about volume info
10:25 Nev__ err, volume status
10:31 andreask joined #gluster
10:35 haritsu joined #gluster
10:40 Remco Nev__: If you want to be able to do NFS failover, you should run it on both nodes
10:43 Nev__ so they don't interfere with each other
10:43 Nev__ ok, so i will start the other one as well
10:49 Nev__ uhm, but how to start the local nfs server?!
10:49 Nev__ volume set lims1 nfs.disable off  ,, does nothing
10:50 jkroon joined #gluster
10:50 jkroon hi guys
10:50 jkroon how does gluster deal with hard links.
10:51 jkroon as I understand it, distribute hashes based on the filename, so what if the two filenames map to two different bricks?
10:55 RicardoSSP joined #gluster
11:01 ababu joined #gluster
11:03 Remco jkroon: http://gluster.org/community/documentation/index.php/Arch/Glusterfs_Hard_Links
11:03 glusterbot <http://goo.gl/1GXVKC> (at gluster.org)
11:03 * Remco hasn't used hardlinks in gluster himself
11:04 Remco Nev__: I think it requires you to restart gluster
11:05 Remco It needs some nfs-common things, so make sure you have everything installed before you do that
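
A hedged sketch of getting gluster's built-in NFS server up on both replica nodes; the volume name lims1 comes from the log above, and the service names vary by distro:

    # the kernel NFS server must be off; gluster runs its own NFSv3 process
    service nfs-kernel-server stop      # name varies by distro
    gluster volume set lims1 nfs.disable off
    service glusterfs-server restart    # or "service glusterd restart"; respawns the gluster NFS process
    gluster volume status lims1 nfs     # "NFS Server on localhost" should now show Online on each node
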
11:06 diegows_ joined #gluster
11:13 Nev__ the weird thing is, the localhost nfs server is stopped, the replicate interface 10.0.0.94 is started on one host, on the other host it's the other way around
11:15 glusterbot New news from newglusterbugs: [Bug 1026291] quota: directory limit cross, while creating data in subdirs <http://goo.gl/hesUtT>
11:16 hagarth joined #gluster
11:17 geewiz Hi! What's the best way of resolving split-brain errors with a directory?
11:35 ababu joined #gluster
11:36 haritsu joined #gluster
11:37 kkeithley1 joined #gluster
11:38 jtux joined #gluster
11:46 rastar joined #gluster
11:47 haritsu joined #gluster
11:52 ndarshan joined #gluster
11:54 rcheleguini joined #gluster
12:03 itisravi_ joined #gluster
12:05 hagarth joined #gluster
12:10 andreask joined #gluster
12:16 glusterbot New news from newglusterbugs: [Bug 955548] adding host uuids to volume status command xml output <http://goo.gl/rZS9c> || [Bug 990028] enable gfid to path conversion <http://goo.gl/1HwiQc> || [Bug 969461] RFE: Quota fixes <http://goo.gl/XFSM4>
12:16 ctria joined #gluster
12:17 ngoswami joined #gluster
12:19 ppai joined #gluster
12:21 getup- joined #gluster
12:25 kevein joined #gluster
12:29 ira joined #gluster
12:29 haritsu joined #gluster
12:41 vimal joined #gluster
12:44 vimal joined #gluster
12:46 haritsu joined #gluster
12:49 _Bryan__ joined #gluster
12:53 chirino joined #gluster
12:59 andreask joined #gluster
13:01 pdrakeweb joined #gluster
13:04 samsamm joined #gluster
13:07 rcheleguini joined #gluster
13:09 kkeithley1 joined #gluster
13:10 shylesh joined #gluster
13:13 calum_ joined #gluster
13:15 hagarth joined #gluster
13:20 B21956 joined #gluster
13:23 getup- hi, what can you do with gluster to prevent a split brain from happening at all?
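
This one never gets answered in-channel; the usual levers are the quorum options on replicated volumes, roughly as below, with a hypothetical volume name myvol:

    # client-side quorum: writes fail unless a majority of replicas are reachable
    gluster volume set myvol cluster.quorum-type auto
    # server-side quorum: glusterd stops bricks when too few peers are up
    gluster volume set myvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%
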
13:26 tqrst JoeJulian: no crashes or memory leaks yet with fix-layout, yay. I am seeing a whole lot of "[dht-linkfile.c:213:dht_linkfile_setattr_cbk] 0-$volname-dht: setattr of uid/gid on $filename :<gfid:00000000-0000-0000-0000-000000000000> failed (Invalid argument)", though. Seems readable just fine.
13:27 tqrst rummaging through my bricks, I can see that one pair has the files proper, while the other pair has an empty file with a {m,c}time of today and root/root as owner. hrm.
13:31 tqrst this is a folder that goes all the way back to when we were running 3.2 and had a ton of weird issues, though, so I'm not too surprised
13:33 harish_ joined #gluster
13:36 getup- joined #gluster
13:41 jkroon Remco, new setup.  Currently have an 8TB single storage unit, out of space, so looking to distribute it over multiple machines, however, the application relies heavily on hard linking.
13:41 tqrst http://bpaste.net/show/C7i0gXN8S0UvbsuXgEOg/
13:41 glusterbot Title: Paste #C7i0gXN8S0UvbsuXgEOg at spacepaste (at bpaste.net)
13:43 Remco jkroon: On the link I gave it does say "Hard links for the new path are created on the same brick that contains the existng inode."
13:44 calum_ joined #gluster
13:45 Remco But you should probably talk to the resident experts in here that might still be asleep
13:45 jkroon Remco, yea, with some link file.  I don't have GeoRep so I should be OK.
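
The "link file" jkroon mentions can be seen directly on the bricks; a sketch with a hypothetical brick path:

    # DHT link files are empty, mode ---------T, and point at the brick holding the data
    ls -l /export/brick1/some/dir
    getfattr -d -m . -e hex /export/brick1/some/dir/somefile
    # look for a trusted.glusterfs.dht.linkto value naming the subvolume with the real file
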
13:49 morse joined #gluster
13:50 hchiramm__ joined #gluster
13:58 recidive joined #gluster
14:01 davidbierce joined #gluster
14:05 tqrst if I'm archiving a bunch of folders on my volume, would I be better off tarring to /tmp and then copying back, or tarring in place?
14:05 bennyturns joined #gluster
14:09 Remco tqrst: I think the copy would be better
14:10 tqrst Remco: that's what I've assumed for a while, but figured there might have been performance improvements for incremental writes at some point
14:10 Remco If tar writes in large blocks, it might not matter
14:11 Remco It's more about the amount of reads and writes than the size
14:11 Remco Also keep in mind that writing will consume bandwidth towards all replicas
14:12 Remco I guess you should test and write about it :)
14:13 tqrst I unfortunately can't really run tests until this is done archiving and rebalancing
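
The underlying point: many small FUSE writes each cost round-trips to every replica, so building the archive locally and then doing one sequential copy back is usually cheaper. A rough sketch with hypothetical paths:

    # read from the volume, write the tarball to local disk, then copy it back in one pass
    tar -czf /tmp/project.tar.gz -C /mnt/glustervol/archive project/
    mv /tmp/project.tar.gz /mnt/glustervol/archive/
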
14:38 Glusted joined #gluster
14:39 kkeithley2 joined #gluster
14:39 Glusted Hi again, can anyone please clarify the output of command gluster ... heal myvolume full
14:39 Glusted then "myvolume info heal-failed
14:40 kkeithley1 joined #gluster
14:40 jag3773 joined #gluster
14:40 hybrid512 joined #gluster
14:46 mohankumar__ joined #gluster
14:50 Glusted Hello, I have questions about the following command: gluster volume heal myvolume info heal-failed
14:50 Glusted Anyone knows what it is supposed to return?
14:50 Glusted pS: I have glusterfs 3.4.0
14:50 bugs_ joined #gluster
14:52 mattf joined #gluster
14:54 dbruhn joined #gluster
14:57 diegows_ joined #gluster
15:04 hagarth Glusted: can you send out an email on gluster-users?
15:07 sprachgenerator joined #gluster
15:11 getup- joined #gluster
15:11 calum_ joined #gluster
15:11 lpabon joined #gluster
15:14 premera joined #gluster
15:15 straylyon joined #gluster
15:16 jbautista|brb joined #gluster
15:18 lpabon joined #gluster
15:21 failshell joined #gluster
15:43 andreask joined #gluster
15:45 LoudNois_ joined #gluster
15:47 bala joined #gluster
15:49 rjoseph joined #gluster
15:57 ctria joined #gluster
16:01 recidive joined #gluster
16:06 xymox joined #gluster
16:07 sjoeboo joined #gluster
16:13 xymox joined #gluster
16:13 andreask joined #gluster
16:20 haritsu joined #gluster
16:22 straylyo_ joined #gluster
16:25 haritsu joined #gluster
16:42 straylyon joined #gluster
16:43 zaitcev joined #gluster
16:58 aliguori joined #gluster
17:07 davidbierce joined #gluster
17:24 gkleiman joined #gluster
17:28 kaptk2 joined #gluster
17:28 Mo__ joined #gluster
17:29 marbu joined #gluster
17:33 hagarth joined #gluster
17:37 jskinner_ joined #gluster
17:44 calum_ joined #gluster
17:47 glusterbot New news from newglusterbugs: [Bug 1030580] Feature request (CLI): Add an option to the CLI to fetch just incremental or cumulative I/O statistics <http://goo.gl/smvSH1>
17:51 jag3773 joined #gluster
17:57 andreask joined #gluster
18:16 calum_ joined #gluster
18:21 recidive joined #gluster
18:28 kanagaraj joined #gluster
18:45 brimstone joined #gluster
18:45 brimstone is communication between servers only encrypted in 3.4 and later? (server.ssl option)
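
Answered further down: SSL for the I/O path is new in 3.4. A hedged sketch of enabling it, assuming the stock certificate paths gluster looks for and a hypothetical volume myvol:

    # each server and client needs /etc/ssl/glusterfs.key, glusterfs.pem and glusterfs.ca
    gluster volume set myvol client.ssl on   # encrypt client <-> brick traffic
    gluster volume set myvol server.ssl on   # encrypt brick-side connections
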
18:53 andreask joined #gluster
18:55 tyl0r joined #gluster
19:21 kaptk2 joined #gluster
19:21 aliguori joined #gluster
19:37 geewiz joined #gluster
19:38 geewiz Hi! What's the best way of resolving split-brain errors with a directory? I'm not really keen on deleting the whole tree on the "bad" brick...
19:42 zerick joined #gluster
19:45 Technicool joined #gluster
19:53 bgpepi joined #gluster
19:54 sprachgenerator has anyone ever seen this issue on a gluster replacement peer: [xlator.c:390:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
19:56 sprachgenerator my volume file for glusterd.vol is identical on both the working and non-working node
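
When glusterd dies at xlator_init like this, running it in the foreground with debug logging usually names the offending option or directory; a sketch assuming default paths:

    # run glusterd in the foreground with debug output instead of as a daemon
    glusterd --debug
    # or read the log from the failed start
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
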
19:56 JoeJulian geewiz: I set the trusted.afr attributes to all zeros on the "bad" brick(s) for the affected directory.
19:57 JoeJulian brimstone: Yes, ssl is a new feature with 3.4.
19:58 geewiz JoeJulian: Provided there's only a gfid conflict, right? What if there are metadata differences, e.g. mtime?
19:58 JoeJulian geewiz: The one without all zeros will update the other(s).
19:59 JoeJulian And no. A gfid conflict would require more effort.
19:59 skered- We have a DEC Alpha machine that can't connect to a gluster nfs export.. mount outputs "Cannot MNT PRC: RPC: Program not registered"
19:59 skered- Any idea what that means?
20:00 geewiz JoeJulian: The actual error message is "Unable to self-heal permissions/ownership".
20:01 JoeJulian skered-: Maybe it's not trying version 3?
20:01 geewiz JoeJulian: And, after the file path: "(possible split-brain). Please fix the file on all backend volumes" And I wonder how to actually perform this fix.
20:01 JoeJulian geewiz: I set the trusted.afr attributes to all zeros on the "bad" brick(s) for the affected directory.
20:02 skered- JoeJulian: That's what I thought but I can mount another "normal" nfs from a Linux box.
20:02 JoeJulian skered-: The kernel nfs service also provides v4
20:02 skered- Is there a way to confirm that's nfsv3?  I have "... -o nfsv3,..." on the dec alpha mount command
20:02 geewiz JoeJulian: And this will fix the whole directory node? That's brilliant. I'll try that.
20:04 JoeJulian skered-: I'm afraid that I know nothing about the os you have on the alpha, nor what its requirements might be. If I were diagnosing it, I would use wireshark.
20:06 skered- JoeJulian: I starting looking at tcpdump and it's trying to do something via UDP sunrpc and that's it
20:08 JoeJulian Have you checked to make sure that another linux box can mount nfs?
20:08 skered- Yes
20:10 JoeJulian oh.. udp... needs to be tcp
20:10 calum_ joined #gluster
20:11 recidive joined #gluster
20:11 skered- JoeJulian: Yeah, looks like there's a mount option "proto=tcp" for Alpha's mount
21:11 skered- Seems like the same option I'm used to in Linux
20:11 skered- Even though that doesn't fix it :/
20:13 semiosis ,,(nfs)
20:13 glusterbot To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
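
"Program not registered" means the MOUNT service isn't registered with the portmapper the client queried, which is what the factoid above is getting at; a sketch of checking from the client and forcing v3 over TCP, with hypothetical names (the Alpha's mount option spelling may differ):

    # gluster's NFS server should show mountd and nfs v3 registered with rpcbind
    rpcinfo -p server
    # Linux-style mount forcing NFSv3 over TCP
    mount -t nfs -o vers=3,proto=tcp server:/myvol /mnt/myvol
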
20:13 dbruhn joined #gluster
20:13 semiosis skered-: try -v?
20:14 semiosis also could look at the log, /var/log/glusterfs/nfs.log, on the server
20:14 skered- semiosis: Nothing there.  -v just shows the options
20:15 semiosis what os is the alpha machine running?
20:16 skered- V4.0 878 alpha
20:17 semiosis interesting
20:18 geewiz JoeJulian: So, I reset the xattrs via "setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000000000000 $FILE"?
20:19 JoeJulian yes
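
The whole procedure JoeJulian is confirming, as a hedged sketch; the brick path and the client-N suffixes are placeholders, and the xattrs are zeroed only on the copy you have decided is "bad":

    # on the bad brick, look at the pending-change counters for the directory
    getfattr -d -m . -e hex /export/brick1/path/to/dir
    # zero the trusted.afr.* entries so the other replica (still non-zero) becomes the source
    setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000000000000 /export/brick1/path/to/dir
    setfattr -n trusted.afr.vol-client-1 -v 0x000000000000000000000000 /export/brick1/path/to/dir
    # then kick off a heal
    gluster volume heal vol full
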
20:30 cyberbootje joined #gluster
20:32 badone joined #gluster
20:33 semiosis skered-: whats the whole mount command?
20:34 jag3773 joined #gluster
20:40 diegows joined #gluster
20:59 MarkR joined #gluster
21:00 ThatGraemeGuy joined #gluster
21:05 MarkR e
21:06 MarkR On a lot of our hosts, glusterfs (and only gluster) has network issues:
21:06 MarkR $ sudo netstat -pan|grep SYN_SENT
21:06 MarkR tcp        0      1 10.0.0.70:1023        10.0.0.23:24007       SYN_SENT    2510/glusterfs
21:07 MarkR This seem to be conntrack related, but I can't figure out how to get it right.
21:08 MarkR If I manually umount/mount, the connection does get established properly.
21:11 MarkR Someone ideas where to start looking?
21:14 MarkR Glusterfs 3.3.2-ubuntu1~precise2
21:20 MarkR iptables logs contain a lot of:
21:20 MarkR Nov 14 22:19:18 app10 kernel: [1842002.423704] [INPUT] dropped IN=eth0 OUT= MAC=... SRC=10.0.0.23 DST=10.0.0.70 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=24007 DPT=1023 WINDOW=14480 RES=0x00 ACK SYN URGP=0
21:25 haritsu joined #gluster
21:26 Remco Did you try opening all the ports gluster needs?
21:33 MarkR I guess so (http://ur1.ca/g1cw0) Remco
21:33 glusterbot Title: #54154 Fedora Project Pastebin (at ur1.ca)
21:34 Remco ,,(ports)
21:34 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
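
MarkR's SYN_SENT symptom plus the dropped SYN/ACK in the firewall log fits a conntrack entry timing out on a long-idle gluster connection, so the reply no longer matches an ESTABLISHED rule. A hedged iptables sketch covering the factoid's pre-3.4 ports for a two-brick volume:

    # let conntrack-tracked replies through first
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # glusterd management (24008 only needed for rdma)
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    # brick ports, two bricks, glusterfs < 3.4
    iptables -A INPUT -p tcp --dport 24009:24010 -j ACCEPT
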
21:34 DataBeaver What's up with messages like this: [2013-11-14 21:32:03.644259] W [fuse-bridge.c:2127:fuse_writev_cbk] 0-glusterfs-fuse: 39270216: WRITE => -1 (Bad file descriptor)  My logs get spammed with a lot of these, around 5 MB/s at the moment.
21:35 brimstone what are popular solutions for encrypting traffic between bricks and clients?
21:36 Remco MarkR: You don't have an accept all for gluster?
21:36 MarkR Well, on the client, I have a OUTPUT rule to accept 24007-24010 (for two bricks)
21:36 MarkR Don't use NFS
21:37 Remco Is this a test setup?
21:38 MarkR My guess is it's not the iptables config. When I umount/mount the share, the connection gets established properly. After a while (don't know when), some connections break and get stuck in SYN_SENT.
21:40 Remco If possible you should upgrade to 3.4.1
21:43 zerick joined #gluster
21:44 MarkR Is 3.4.1 stable for a production environment? 3.3.2 seems to be the last GA release.
21:45 Remco ,,(ppa)
21:45 glusterbot The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
21:46 Remco It has lots of fixes
21:46 Remco How stable it is I can't tell you, but it probably has fewer issues than older versions
21:46 diegows joined #gluster
21:47 skered- semiosis: mount -t nfs -o tcp,vers=3 server:/export/path /mount/point
21:48 skered- ..-o nfsv3,proto=tcp.. as well
22:04 cfeller joined #gluster
22:05 kPb_in_ joined #gluster
22:21 skered- Do you think it would be better to file a bug directly via bugzilla.redhat.com or since we're a Redhat Customer file it through Redhat's site?  I'm not using Redhat Storage here, I don't know if that matters.
22:21 glusterbot http://goo.gl/UUuCq
22:28 diegows joined #gluster
22:44 failshel_ joined #gluster
22:45 bennyturns skered-, open a support case is the best idea
22:46 bennyturns skered-, you can file the BZ too, just mention the BZ in the case so they can attach it
22:56 jskinner joined #gluster
23:01 tg2 joined #gluster
23:04 jag3773 joined #gluster
23:10 nasso joined #gluster
23:51 tg2 joined #gluster
23:54 recidive joined #gluster
