
IRC log for #gluster, 2013-02-26


All times are shown in UTC.

Time Nick Message
00:03 raven-np joined #gluster
00:18 Humble joined #gluster
00:34 johndescs_ joined #gluster
00:39 bala joined #gluster
01:09 Humble joined #gluster
01:27 Humble joined #gluster
01:33 kevein joined #gluster
01:35 ehg joined #gluster
01:59 Humble joined #gluster
02:24 raven-np joined #gluster
02:40 Humble joined #gluster
02:44 lpabon joined #gluster
02:52 vshankar joined #gluster
02:58 Humble joined #gluster
03:07 pipopopo joined #gluster
03:27 shylesh joined #gluster
03:30 lpabon joined #gluster
03:33 Humble joined #gluster
03:36 anmol joined #gluster
03:38 vpshastry joined #gluster
03:38 vpshastry left #gluster
03:46 an joined #gluster
03:50 mohankumar joined #gluster
03:51 sgowda joined #gluster
04:01 Humble joined #gluster
04:04 satheesh joined #gluster
04:05 bala joined #gluster
04:34 bulde joined #gluster
04:36 mshadle joined #gluster
04:37 sripathi joined #gluster
04:37 mshadle if i had a mirrored volume (on nas01 and nas02) and nas01 died, i reformatted it, set it up, how can i re-attach and basically resume operations?
04:38 Humble joined #gluster
04:38 vpshastry joined #gluster
04:40 lala joined #gluster
04:48 hagarth joined #gluster
04:55 bharata joined #gluster
05:03 layer7switch joined #gluster
05:08 layer7switch joined #gluster
05:09 bala joined #gluster
05:13 an joined #gluster
05:15 deepakcs joined #gluster
05:16 ainur-russia joined #gluster
05:44 raghu joined #gluster
05:48 sahina joined #gluster
05:49 sripathi joined #gluster
05:56 _pol joined #gluster
06:07 ainur-russia joined #gluster
06:11 Humble joined #gluster
06:13 timothy joined #gluster
06:14 ainur-russia joined #gluster
06:25 sripathi1 joined #gluster
06:26 rotbeard joined #gluster
06:31 sripathi joined #gluster
06:34 ainur-russia joined #gluster
06:41 Humble joined #gluster
06:57 joeto joined #gluster
06:58 vimal joined #gluster
07:01 satheesh joined #gluster
07:13 jtux joined #gluster
07:22 rgustafs joined #gluster
07:29 rastar joined #gluster
07:34 aravindavk joined #gluster
07:37 ramkrsna joined #gluster
07:38 ctria joined #gluster
07:41 ninkotech_ joined #gluster
07:44 Humble joined #gluster
07:47 Nevan joined #gluster
07:49 ramkrsna joined #gluster
07:51 17WAA70PK joined #gluster
07:51 nixpanic joined #gluster
07:51 jds2001 joined #gluster
08:00 rgustafs joined #gluster
08:02 jtux joined #gluster
08:02 ngoswami joined #gluster
08:03 guigui joined #gluster
08:11 hybrid512 joined #gluster
08:12 rgustafs joined #gluster
08:15 andreask joined #gluster
08:20 glusterbot New news from newglusterbugs: [Bug 882127] The python binary should be able to be overridden in gsyncd <http://goo.gl/cnTha>
08:25 srhudli joined #gluster
08:30 ThatGraemeGuy joined #gluster
08:33 Staples84 joined #gluster
08:33 pithagorians joined #gluster
08:34 vpshastry left #gluster
08:37 johndescs left #gluster
08:37 vpshastry joined #gluster
08:40 hagarth joined #gluster
08:43 tjikkun_work joined #gluster
08:45 dobber_ joined #gluster
08:46 pithagorians hi all. i have a setup of 2 servers on debian 6, glusterfs 3.3.1-1 on servers and clients. i got a folder viewed from client with question marks (?) for every property but name.
08:47 pithagorians here are my configs for 2 different clients and a snippet of the log file from one server http://paste.ubuntu.com/5567108/
08:47 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
08:48 pithagorians as you see, i have storage2:/file-storage and storage1:/file-storage in mounts
08:48 pithagorians question is: are these 2 things related to each other ?
08:49 pithagorians is it possible to get a "split brain" after some longer period of time /
08:49 pithagorians ?
08:49 pithagorians how to avoid such situations ?
08:51 Humble joined #gluster
08:52 tryggvil joined #gluster
08:52 gbrand_ joined #gluster
09:02 ramkrsna joined #gluster
09:15 ekuric joined #gluster
09:20 glusterbot New news from newglusterbugs: [Bug 915643] Improve debuggability of afr transaction <http://goo.gl/nDv95>
09:30 yinyin joined #gluster
09:32 jiffe2 joined #gluster
09:42 AndroUser2 joined #gluster
09:50 glusterbot New news from newglusterbugs: [Bug 826021] Geo-rep ip based access control is broken. <http://goo.gl/jsj1f>
09:51 jiffe1 joined #gluster
09:58 cw joined #gluster
10:05 hagarth joined #gluster
10:15 cyberbootje joined #gluster
10:32 manik joined #gluster
10:45 clag_ joined #gluster
10:49 raven-np joined #gluster
10:50 Staples84 joined #gluster
10:56 an joined #gluster
11:07 an joined #gluster
11:16 Era joined #gluster
11:16 Era Hello !
11:18 Era I have a question , can we mount a part of a volume point like a nfs ?
11:19 Norky a subdirectory within a volume?
11:19 Norky not via the native glusterfs protocol, I believe
11:19 Norky however it should be possible via NFS
11:20 Era hmmm like I have a gluster Volume "volume1", in that I have volume1/var/www/site.com and I want to mount only the /var/www/site.com
11:25 ThatGraemeGuy if it's on linux, you could use a bind mount: 'mount --bind /home/me /myhomedir' makes /home/me available through /myhomedir
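        (A minimal sketch of the bind-mount approach ThatGraemeGuy describes, applied to Era's volume1/var/www/site.com example; the server name server1 and the mount points are illustrative, not from the log:
            mount -t glusterfs server1:/volume1 /mnt/volume1
            mount --bind /mnt/volume1/var/www/site.com /var/www/site.com
        The volume is mounted once over FUSE, then only the wanted subdirectory is re-exposed with the bind mount.)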
11:26 purpleidea joined #gluster
11:26 purpleidea joined #gluster
11:28 andreask joined #gluster
11:44 sahina joined #gluster
11:48 Era Norky, Via nfs? you mean adding an nfs client over glusterfs?
11:49 Norky Era, gluster includes NFS service
11:50 Norky it's only NFSv3, not v4
11:50 Norky see what happens when you type "gluster volume status VOLNAME"
11:52 Era HA thanks didn't know that
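        (A sketch of the NFS route Norky suggests, with illustrative names; the vers=3/tcp options follow the nfs tip glusterbot gives later in this log:
            mount -t nfs -o vers=3,tcp server1:/volume1/var/www/site.com /mnt/site.com
        Depending on the GlusterFS version, subdirectory exports may first need to be allowed with the nfs.export-dir volume option; that option name is an assumption to verify against your release.)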
11:53 edward1 joined #gluster
11:59 rgustafs joined #gluster
11:59 jdarcy joined #gluster
12:01 ctrianta joined #gluster
12:03 raven-np joined #gluster
12:11 EJ joined #gluster
12:25 Era Norky, there ? If i try to mount with nfs it say "failed, reason given by server:
12:25 Era No such file or directory" any suggestion ?
12:26 an joined #gluster
12:37 ThatGraemeGuy joined #gluster
12:39 glusterbot New news from resolvedglusterbugs: [Bug 811672] mountbroker initiated umounts fail with EACCES on RHEL systems. <http://goo.gl/bmyrd>
12:47 theron joined #gluster
12:53 raven-np joined #gluster
12:56 mohankumar joined #gluster
13:02 pithagorians hi all. i have a setup of 2 servers on debian 6, glusterfs 3.3.1-1 on servers and clients. i got a folder viewed from client with question marks (?) for every property but name. here are my configs for 2 different clients and snippet of log file from one server http://paste.ubuntu.com/5567108/
13:02 pithagorians as you see, i have storage2:/file-storage and storage1:/file-storage in mounts
13:02 pithagorians question is: are these 2 things related to each other ?
13:02 pithagorians is it possible to get a "split brain" after some longer period of time ?
13:02 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:02 pithagorians how to avoid such situations ?
13:07 plarsen joined #gluster
13:17 jdarcy joined #gluster
13:17 manik joined #gluster
13:24 jdarcy joined #gluster
13:34 ctrianta joined #gluster
13:54 manik joined #gluster
14:02 dustint joined #gluster
14:08 rwheeler joined #gluster
14:09 jdarcy_ joined #gluster
14:10 nueces joined #gluster
14:13 mohankumar joined #gluster
14:16 larsks joined #gluster
14:27 vshankar joined #gluster
14:32 hagarth joined #gluster
14:43 Humble joined #gluster
14:46 raven-np joined #gluster
14:51 stopbit joined #gluster
14:53 vpshastry joined #gluster
14:54 lpabon joined #gluster
14:59 lala_ joined #gluster
15:11 aliguori joined #gluster
15:15 jdarcy joined #gluster
15:20 bugs_ joined #gluster
15:26 _pol joined #gluster
15:27 lpabon ZFS and Gluster: Interesting read: http://pthree.org/2013/01/25/glusterfs-linked-list-topology/
15:27 glusterbot <http://goo.gl/0HHCK> (at pthree.org)
15:34 frakt Hi glusterers. I've got a problem: [2013-02-26 16:19:50] W [fuse-bridge.c:1219:fuse_unlink_cbk] glusterfs-fuse: 36641576: UNLINK() /path/to/file/on/mounted-gluster-volume/filename => -1 (No such file or directory)
15:35 frakt thats a log file from a client. I tried removing the file but rm never finishes
15:37 frakt I tried removing it from an other client but same symptom - rm hangs
15:38 manik joined #gluster
15:38 frakt (we're still running some really old glusterfs version)
15:43 Humble joined #gluster
15:47 qwerty123 joined #gluster
15:48 qwerty123 hi
15:48 glusterbot qwerty123: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:53 dustint joined #gluster
15:53 raven-np joined #gluster
15:56 qwerty123 looking for some advice how to use gluster securely over internet
15:56 qwerty123 my understanding is default gluster connections are not encrypted?
15:57 jdarcy True.  However, we do have SSL support.
15:58 qwerty123 could you point me to some docs?
16:00 lpabon joined #gluster
16:00 jdarcy Argh.  Looks like the docs are still work-in-progress.  Just a sec.
16:00 qwerty123 ;)
16:00 qwerty123 looking at some slides now http://www.slideshare.net/johnmarkorg/the-future-of-glusterfs-and-glusterorg-11280435
16:00 glusterbot <http://goo.gl/6Csn1> (at www.slideshare.net)
16:03 Guest81074 left #gluster
16:03 bala joined #gluster
16:04 jdarcy http://git.fedorahosted.org/cgit/CloudFS.git/tree/doc/hekafs.8 (needs to be run through man/nroff) has some info from HekaFS.
16:04 glusterbot <http://goo.gl/cSijB> (at git.fedorahosted.org)
16:04 qwerty123 thanks
16:04 jdarcy https://github.com/gluster/glusterfs/blob/master/tests/bugs/bug-873367.t is an example of setting up the key/cert/CA from a script
16:04 glusterbot <http://goo.gl/CWVsO> (at github.com)
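        (Very roughly the shape of what that test script does, condensed into a sketch; the file locations under /etc/ssl, the common name, and the client.ssl/server.ssl option names are assumptions that may not match every release, so treat this as orientation rather than a recipe:
            openssl genrsa -out /etc/ssl/glusterfs.key 2048
            openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj /CN=gluster-peer -out /etc/ssl/glusterfs.pem
            cp /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.ca    # the CA file holds the certs this node trusts
            gluster volume set myvol server.ssl on
            gluster volume set myvol client.ssl on
        Each server and client needs its own key/cert plus a CA file listing the certificates it trusts.)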
16:05 johnmark greetings from die Schweiz :)
16:05 jdarcy It's definitely a bit fludgy at present.  Sorry about that.
16:05 jdarcy johnmark: Hey, how is/was CERN?
16:05 qwerty123 no worries - I am considering using HDFS but my friend prefers GlusterFS
16:05 qwerty123 looking what the latter is ;)
16:06 johnmark jdarcy: going well :)  about to wrap up
16:06 jdarcy qwerty123: Might also want to check out XtreemFS if you're interested in a scale-out filesystem that does security decently well.
16:07 qwerty123 well - I am thinking about using HDFS or any distributed system just for backups to distribute them over several physical locations
16:08 vpshastry joined #gluster
16:08 qwerty123 I can encrypt data first and then write it, but would prefer to have some encryption over transit as well
16:08 qwerty123 jdarcy: will look at XtreemFS too
16:09 qwerty123 If I can enable SSL in GlusterFS this may be viable solution
16:09 qwerty123 too
16:10 qwerty123 What does that mean 'GlusterFS 3.3 has Hadoop/HDFS compatibility'?
16:11 jdarcy It means that we've implemented the Hadoop Filesystem API (class) to handle queries about data location.
16:11 Humble joined #gluster
16:11 jdarcy So the Hadoop job tracker can put jobs where their data is.
16:12 qwerty123 cool
16:12 daMaestro joined #gluster
16:12 qwerty123 So GlusterFS ca nbe used as FS for Hadoop?
16:12 qwerty123 was not aware of that
16:13 jdarcy Oh, we've just begun on that front.
16:14 qwerty123 hehe, nice
16:17 rotbeard hi there, I tried to upgrade my 3.0 to 3.2 (debian stable > debian backports). for whatever reason, I finally got a split brain. my plan is: leave 1 node alone as the actual live machine and build a new gluster neighborhood with node2 + a temporary system. if the files from node1 are synced to that 'new' glusterfs share, I want to clear node1 and put it into the node2 neighborhood with replace-brick. good idea?
16:17 jdarcy HDFS is an idiot savant - it does one thing well, and everything else poorly.  With us and other people who actually know storage working on this, a year from now that one last excuse for using HDFS will be gone.
16:18 qwerty123 haha
16:18 qwerty123 looking at XtreemFS now. How does XtreemFS compare to GlusterFS?
16:19 jdarcy rotbeard: Sounds like a pretty common redeployment technique to me.
16:19 jdarcy http://hekafs.org/index.php/2011/08/quick-look-at-xtreemfs/
16:19 glusterbot <http://goo.gl/aWGYc> (at hekafs.org)
16:20 rotbeard jdarcy, thanks. I am very new to glusterfs and that replicated storage stuff. I use drbd on some other systems but I read and heard a lot about glusterfs and it looks pretty nice for my job
16:20 manik joined #gluster
16:21 jdarcy Short version: XtreemFS doesn't have the breadth of features or performance options that GlusterFS does, but they have some features we don't and the implementation seems more than decent.
16:22 qwerty123 cool
16:22 qwerty123 myself I do not think I care that much about performance right now
16:22 qwerty123 so maybe I will need to read about xtremefs
16:25 qwerty123 jdarcy: Are you Gluster developer?
16:27 jdarcy qwerty123: Yes.
16:32 pithagorians hi all. i have a setup of 2 servers on debian 6, glusterfs 3.3.1-1 on servers and clients. i got a folder viewed from client with question marks (?) for every property but name. here are my configs for 2 different clients and snippet of log file from one server http://paste.ubuntu.com/5567108/
16:32 pithagorians as you see, i have storage2:/file-storage and storage1:/file-storage in mounts
16:32 pithagorians question is: are these 2 things related to each other ?
16:32 pithagorians is it possible to get a "split brain" after some longer period of time ?
16:32 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
16:32 pithagorians how to avoid such situations ?
16:35 eightyeight i have geo-replication working (supposedly), but it appears to be missing a lot of files that my client mount on the peers has. i thought the replication was automatic? is there something i need to do to trigger replicating the missing files?
16:36 jdarcy pithagorians: It shouldn't matter which machine you mount from.  Once the client has fetched the config, it'll connect to all of the bricks regardless.
16:37 hagarth joined #gluster
16:37 jdarcy pithagorians: The real question is whether this is a distributed or replicated volume, and whether the clients can actually *see* all of the bricks.
16:37 jdarcy pithagorians: Split brain happens when one client can only write to one member of a replica pair, and another client can only write to the other replica.
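        (A few checks that follow from jdarcy's explanation, with myvol as a placeholder volume name:
            gluster volume info myvol                      # confirm the replica count / layout
            gluster volume status myvol                    # confirm every brick is online and reachable
            gluster volume heal myvol info split-brain     # list entries 3.3 considers split-brain
        If the client log shows bricks disconnecting, fix connectivity first; split brain only develops while writes land on one replica but not the other.)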
16:40 pithagorians <jdarcy> it is replicated
16:41 pithagorians the clients can see the brick
16:41 pithagorians but the js folder inside viewed by client has question marks and can't be accessed
16:41 pithagorians no operation can be done on it from client
16:42 rwheeler joined #gluster
16:43 Humble joined #gluster
16:44 jdarcy pithagorians: Anything in the client logs about either disconnections or split brain?
16:49 pithagorians http://paste.ubuntu.com/5568085/ this is what i see there
16:49 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
16:51 glusterbot New news from newglusterbugs: [Bug 895528] 3.4 Alpha Tracker <http://goo.gl/hZmy9>
16:52 zaitcev joined #gluster
16:53 pithagorians the same on other clients
16:53 Norky joined #gluster
16:59 pithagorians and also http://paste.ubuntu.com/5568121/
16:59 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
17:03 eightyeight how can i force missing data to get geo-replicated?
17:03 ramkrsna joined #gluster
17:05 timothy joined #gluster
17:06 x4rlos Gluster when trying to remove a brick from the gluster-cli seems to fall over on wheezy / 3.3.1-2 :: http://pastebin.com/Jf1K2aK5
17:06 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
17:09 x4rlos this has worked before (though this is my first time using the cli).
17:11 Humble joined #gluster
17:13 x4rlos hmmm. Ignore me for now. It may not be that simple.
17:14 qwerty123 jdarcy: is it possible to configure GlusterFS/XtreemFS to work fine even if just one node is available, but also, when other nodes become available, to start replicating documents which were saved during the time when only one node was up?
17:16 lala joined #gluster
17:20 x4rlos am i missing a trick? Stopping the gluster server via init.d seems to just stop the one process, and leave the rest running... http://pastebin.com/vakbri5L
17:20 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
17:25 disarone joined #gluster
17:27 qwerty123 I guess my requirements are two:
17:27 qwerty123 - distributed FS which supports SSL or some other kind of encryption
17:28 _Bryan_ So if I am getting a bunch of "no subvolume for hash (value)" what is the best way to clean these up?
17:28 qwerty123 - ability to specify min_no_of_replicas = 1; and desirable_no_of_replicas == 3 (or whatever)
17:29 qwerty123 and I'd like such a FS to accept writes if min_no_of_replicas < current_no_of_replicas < desirable_no_of_replicas
17:30 qwerty123 but also to try to replicate data until it is propagated onto desirable_no_of_replicas
17:30 qwerty123 ;)
17:35 Humble joined #gluster
17:44 wN joined #gluster
17:48 rwheeler_ joined #gluster
17:56 _pol joined #gluster
17:57 _pol joined #gluster
18:04 portante joined #gluster
18:05 y4m4 joined #gluster
18:06 y4m4 joined #gluster
18:07 Humble joined #gluster
18:07 Mo___ joined #gluster
18:18 _pol joined #gluster
18:18 _pol joined #gluster
18:29 Humble joined #gluster
18:49 nueces joined #gluster
19:05 ctrianta joined #gluster
19:07 plarsen joined #gluster
19:10 glusterbot New news from resolvedglusterbugs: [Bug 788696] Gluster CLI cmds "operation failed" when a node off network until glusterd svc is restarted. <http://goo.gl/xUF18>
19:11 Era joined #gluster
19:14 puebele joined #gluster
19:16 semiosis JoeJulian: ping
19:30 hagarth joined #gluster
19:43 pithagorians joined #gluster
19:49 pithagorians anybody who knows solutions for http://paste.ubuntu.com/5568085/ ?
19:49 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
19:55 Humble joined #gluster
19:55 hattenator joined #gluster
19:55 semiosis pithagorians: what version of glusterfs?
19:58 pithagorians <semiosis> 3.3.1-1
19:58 pithagorians on both client and server
19:58 pithagorians on both client and servers
19:58 pithagorians on both clients and servers
19:59 _pol joined #gluster
20:04 semiosis pithagorians: have you written data directly to the backend brick filesystems?
20:04 _pol joined #gluster
20:04 semiosis pithagorians: or did you write all data into the volume through a client mount point like you're supposed to?
20:04 _pol joined #gluster
20:06 lpabon joined #gluster
20:06 * tqrst wonders why gluster even allows bricks to be writable by anything but the cluster daemons
20:06 tqrst s/cluster/gluster
20:07 lpabon joined #gluster
20:08 pithagorians we wrote only through client mount
20:09 semiosis tqrst: well it shouldn't be allowed by policy, but how would you propose enforcing that policy?
20:10 mattr01 joined #gluster
20:12 semiosis pithagorians: possible you have ,,(split-brain)?
20:12 glusterbot pithagorians: (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
20:12 semiosis do the things in #1 sound familiar?
20:13 Humble joined #gluster
20:14 tqrst semiosis: I'm not sure to be honest. Permissions are already used up for the actual permissions (maybe store the original permissions in xattr?). ACLs aren't recursive as far as I can remember.
20:14 pithagorians The former, a network partition, can cause a split brain if two servers are also clients, and both have applications writing to the same file. - this is our case
20:15 pithagorians the servers are clients at the same time
20:15 semiosis pithagorians: and you have apps on both of those machines writing into the same file
20:15 semiosis pithagorians: that's a bummer
20:15 pithagorians and there are writes and reads to/from that mounted partitions
20:16 semiosis i would strongly recommend turning on quorum after fixing the split brain
20:16 semiosis see link #2 in glusterbot's message for info about healing split-brain
20:17 semiosis with quorum enabled the clients will turn read-only when they can't reach a majority of replicas.  this works best with replica >= 3, but iirc it does work with replica 2 as well
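        (A sketch of enabling client-side quorum as suggested above, with myvol as a placeholder volume name:
            gluster volume set myvol cluster.quorum-type auto
        With quorum-type auto the client refuses writes when it cannot reach more than half of the replica set, which is why it is most comfortable with replica 3; on replica 2 it effectively requires the first brick of the pair to be up.)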
20:18 pithagorians we have only 2 servers available for storage so replica is 2
20:18 pithagorians explain me please what iirc is
20:18 semiosis s/iirc/if i remember correctly/
20:18 glusterbot What semiosis meant to say was: with quorum enabled the clients will turn read-only when they can't reach a majority of replicas.  this works best with replica >= 3, but if i remember correctly it does work with replica 2 as well
20:20 pithagorians thanks
20:20 semiosis you're welcome
20:27 pithagorians http://paste.ubuntu.com/5568672/    - it looks like not the split brain is the issue
20:27 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
20:29 mshadle i have node1 and node2, node2 has all the files in /exports/foo, and i have /exports/foo on node1, but gluster is not sending the files over to that node. i have restarted the service, stopped/started the volume, etc. was able to successfully get the two nodes to sync properly on another volume. how can i force them to start syncing again? volume status shows it all looking okay.. but i get no data.
20:35 semiosis mshadle: what version of glusterfs?
20:37 andreask joined #gluster
20:46 mshadle semiosis: 3.3.0
20:46 mshadle i see this:
20:46 mshadle [2013-02-26 12:41:40.592577] I [server3_1-fops.c:252:server_inodelk_cbk] 0-idz-mnt-server: 10: INODELK (null) (--) ==> -1 (No such file or directory)
20:46 mshadle on the server that is lacking the files
20:48 semiosis mshadle: have you been writing directly to the backend brick filesystem?
20:48 jag3773 joined #gluster
20:48 mshadle one folder quickly, and removed it, but that was just recently. haven't otherwise, no
20:49 mshadle wanted to see if even that would propagate
20:49 Humble joined #gluster
20:49 joe- joined #gluster
20:49 mshadle is this considered a self-heal?
20:49 semiosis mshadle: so you've been writing files through a client mount point and they are not showing up on one of your replicas?
20:49 mshadle or what. i want to force replication, and a volume sync doesn't seem to be what i want, says i have to delete the volumes before
20:49 mshadle correct
20:50 mshadle actually, node1 died, so i had to re-install ubuntu, i got it up and going, and got 1 of the 2 volumes to propagate now
20:50 mshadle the second volume though, don't know why i can't get it going
20:50 semiosis mshadle: well sounds like your client is not connected to that brick, which you should fix by unmounting/remounting the client
20:50 semiosis mshadle: there was a bug in 3.3.0 with clients sometimes not reconnecting to bricks, you should upgrade to 3.3.1
20:51 mshadle is it a transparent upgrade?
20:51 mshadle with a PPA or something i can use
20:52 semiosis mshadle: well there is a ,,(ppa) but idk how seamless your upgrade will be
20:52 glusterbot mshadle: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
20:52 semiosis mshadle: anyway, to fix your current problem... 1) unmount/remount the client, watch its log file to make sure it connects to both bricks (or fix any problems preventing that)
20:53 semiosis 2) use 'gluster volume heal <volname> full' to force a self-heal on all files in the volume
20:53 mshadle ok. i have 4 webservers that connect to 2 backend gluster servers, and they use the fuse mount
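        (Condensing semiosis's two steps into commands, using mshadle's idz-mnt volume name; the mount point, server name, and client log path are illustrative:
            umount /mnt/idz-mnt
            mount -t glusterfs nas02:/idz-mnt /mnt/idz-mnt
            tail -f /var/log/glusterfs/mnt-idz-mnt.log     # watch for the client connecting to both bricks
            gluster volume heal idz-mnt full
        The heal ... full form crawls all files, not just the ones already flagged for self-heal.)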
20:55 _pol joined #gluster
20:56 _pol joined #gluster
20:56 pithagorians <semiosis> any solution for my problem? it happened now with a folder with no important data. but if it will happen again with other folder it will be bad.
20:57 semiosis pithagorians: quorum
20:58 pithagorians even if heal info split-brain shows nothing ?
20:58 pithagorians similar to #2
21:01 pithagorians from the mounted client i see the folder with question marks, and the folder has a different size and different files on the two nodes when i look at the bricks directly
21:03 pithagorians the client logs show a stale file handle error for this folder
21:08 mshadle well that broke my setup, heh
21:09 mshadle does glusterfs-server package rely on /etc/glusterfs/glusterfs*.vol files still? i thought that was the deprecated way.
21:10 _pol_ joined #gluster
21:11 lh joined #gluster
21:15 mshadle semiosis: http://pastebin.com/0SyFgwrj - i did an unmount on one client and a remount, and this too
21:15 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
21:17 lpabon joined #gluster
21:17 mshadle it's saying the brick is not connected..
21:23 lpabon_ joined #gluster
21:23 mshadle afr-self-heal-common.c:2156:afr_self_heal_completion_cbk] 0-idz-mnt-replicate-0: background  meta-data data entry self-heal failed on <gfid:0b5d180c-d1e8-4625-8f05-417fe6562553>
21:23 mshadle :/
21:34 fidevo joined #gluster
21:42 Humble joined #gluster
21:46 semiosis mshadle: brick not connected?!  could you please pastie the output of 'gluster volume status idz-mnt'
21:47 semiosis mshadle: volfiles are still used internally by glusterfs, only editing them by hand is deprecated.
21:49 cyberbootje hi, anyone here that can tell me how to get nfs working?
21:49 semiosis ~nfs | cyberbootje
21:49 glusterbot cyberbootje: To mount via nfs, most distros require the options, tcp,vers=3 -- Also portmapper should be running on the server, and the kernel nfs server (nfsd) should be disabled
21:49 BSTR Hey guys, im trying to mount a .vol file off a specific host via glusterfs -- does the following look OK? glusterfs --volfile-id=6efb2c1c-a3bf-4591-8765-3fc6266c5d03 --volfile-server=node2 --volfile-max-fetch-attempts=5 --read-only /mnt/glust
21:49 semiosis cyberbootje: welcome back
21:50 cyberbootje semiosis: thx, well never been away actually, still testing glusterFS a lot
21:50 semiosis BSTR: that looks very unusual.  what version of glusterfs are you using?
21:51 cyberbootje semiosis: i need to mount a gluster volume on windows server 2008 R2, i only get error: 53 not found
21:51 cyberbootje maybe i forgot to do something on gluster?
21:51 BSTR doesn't seem to bring up mount point, and when i manually bring it up with mount -t glusterfs node2:glust-fs /mnt/glust/ it comes up but not in read only...
21:52 semiosis BSTR: mount -t glusterfs node2:glust-fs /mnt/glust/ -o ro
21:52 BSTR semiosis : im using 3.3.1-9
21:52 semiosis BSTR: you need the '-o ro' at the end
21:53 semiosis cyberbootje: try mounting with a linux nfs client to make sure the gluster nfs server is working.  it is on by default, and you should be able to connect if you follow glusterbot's ,,(nfs) tips
21:53 glusterbot cyberbootje: To mount via nfs, most distros require the options, tcp,vers=3 -- Also portmapper should be running on the server, and the kernel nfs server (nfsd) should be disabled
21:53 BSTR semiosis : essentially im trying to dictate which host to mount the .vol files from (i.e. the closest regional host), and fail over to the next in line if that host were to fail
21:54 semiosis BSTR: well you're asking about mounting read-only
21:55 semiosis BSTR: we usually recommend DNS solutions to improve availability of the ,,(mount server)
21:55 glusterbot BSTR: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
21:55 semiosis BSTR: such as ,,(rr-dns)
21:55 glusterbot BSTR: I do not know about 'rr-dns', but I do know about these similar topics: 'rrdns'
21:55 semiosis ,,(rrdns)
21:55 glusterbot You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
21:55 semiosis but in your case some fancy geodns would be appropriate
21:55 semiosis route53 supports that iirc
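        (The usual rrdns arrangement, sketched with documentation addresses: publish one A record per server under a single name, e.g. gluster.example.com -> 192.0.2.11 and 192.0.2.12, and mount through that name:
            mount -t glusterfs gluster.example.com:/myvol /mnt/myvol
        As glusterbot notes above, the named server is only used to fetch the volume definition; after that the client connects to all bricks directly, so rrdns mainly protects the initial mount.)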
21:56 cyberbootje running mount -t nfs -o vers=3 server:/volume /mnt/nfTEST works just fine
21:56 semiosis cyberbootje: good
21:57 cyberbootje for some reason windows needs to bee crappy again..
21:57 cyberbootje be*
22:00 _pol joined #gluster
22:02 Humble joined #gluster
22:03 BSTR semiosis : i was speaking with someone in hear yesterday afternoon about this. Basically what im trying to accomplish is having (2) masters in US -> 2 regional Slaves in Europe -> 2 data center slaves in London (for example) && only allow the nodes to read from the closest node -- i.e. client -> london -> europe -> us masters. Read-only data is required. Is this possible?
22:03 BSTR here*
22:05 BSTR ideally i would like to be able to set a priority or cost on the nodes or slaves to indicate which one to read from
22:05 cw joined #gluster
22:08 semiosis BSTR: not really although you may be able to get something acceptable with geo-replication
22:08 cyberbootje anyone here that has experience with windows nfs as a client?
22:08 wN joined #gluster
22:09 joehoyle joined #gluster
22:09 foster joined #gluster
22:11 BSTR semiosis : Yea, i spent a decent amount of time yesterday setting that up (cascading geo-rep) to mimic my planned topology, but i ran into the same issue where the client would basically read data from a random master or slave, not necessarily the closest slave
22:12 semiosis ~glossary | BSTR
22:12 glusterbot BSTR: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
22:12 JoeJulian It's not random, btw...
22:14 mshadle looks like i got the two volumes in sync now, more or less. there's a slight file count difference though. i did the stat stuff on a client to force stuff
22:14 semiosis mshadle: heal may still be in progress
22:14 mshadle k
22:14 JoeJulian I've been thinking about the whole rrdns thing and how much I don't like the current rrdns implementation and have been thinking of proposing a srv record protocol for doing it better. I'm still working on the rfc, but adding weights for doing what BSTR is asking might not be a bad addition.
22:15 semiosis JoeJulian: well BSTR wants a different answer based on location of the requester, i think
22:17 JoeJulian True, and that can be implemented using split-horizon dns if there was a way of doing weighted responses.
22:18 BSTR correct, im trying to push this for roughly 40 data centers across the globe, so ideally i would like to be able to set a weight on nodes per data center / region
22:18 JoeJulian Today there's no built-in way to do that.
22:20 JoeJulian You could do your own scripts that pull hostnames from your own custom srv records or other data source, and manage mounts that way. If the volume goes away your script would be responsible for unmounting and mounting up the chain.
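        (An illustrative shell sketch of the kind of script JoeJulian describes; the SRV record name, volume, and mount point are placeholders:
            #!/bin/sh
            # Try volfile servers in SRV-priority order, mount from the first one that works.
            for host in $(dig +short SRV _gluster._tcp.example.com | sort -n | awk '{print $4}' | sed 's/\.$//'); do
                mount -t glusterfs "$host:/myvol" /mnt/myvol && break
            done
        A real version would also watch the mount and umount/remount its way up the chain when the current server disappears, as described above.)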
22:20 pipopopo joined #gluster
22:23 Humble joined #gluster
22:25 _pol joined #gluster
22:26 _pol joined #gluster
22:29 cjohnston_work JoeJulian: just following the chatter here about geo-replication - so if I understand this correctly the clients that are local to the slave will still only attach to the master?
22:29 cjohnston_work but will use the slaves IF the master goes down?
22:38 JoeJulian No....
22:38 cjohnston_work hmm so what is the purpose of the remote geo-replicated slave, just for failover?
22:38 JoeJulian geo-replication works like an enhanced rsync. It copies files from the master's volume to one or more slaves.
22:39 cjohnston_work yea I grokked some of the code, looks like its all python+rsync
22:39 cjohnston_work for the geo-rep bits
22:39 JoeJulian If the slave target is a volume (the slave is also a server) then the client /could/ mount that volume.
22:41 cjohnston_work so yesterday I tried to setup a master (brick) -> slave (with a brick) with the master replicating to the mounted brick on the slave and attempting to mount the remote client from the slave
22:41 Humble joined #gluster
22:42 JoeJulian So server1:/myvol1 geo-replicates to server2:/myremotevol1, a glusterfs volume, which geo-replicates to server3:/myfurtherremotevol1, also a gluster volume. A client could mount any of those, but only write to server1:/myvol1
22:42 JoeJulian You don't geo-replicate to a brick. That's "bad".
22:42 cjohnston_work okie, so I tried setting that up yesterday and I think I ran into a namespace clash
22:43 JoeJulian I'm using different volume names in my example just to clearly demonstrate the separation, but they could all have the same volume name.
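        (The 3.3-era commands for the cascade JoeJulian describes would look roughly like this, run on server1 and server2 respectively; the exact slave-URL syntax and any ssh setup depend on the version, so treat the form as an assumption:
            gluster volume geo-replication myvol1 server2::myremotevol1 start
            gluster volume geo-replication myremotevol1 server3::myfurtherremotevol1 start
        The host::volname form means the slave is itself a gluster volume, which is what lets downstream clients mount it read-only.)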
22:43 cjohnston_work so I thought in order to have a gluster volume, it needs to be created off a brick (even if its a single node brick)
22:43 cjohnston_work perhaps I have my terms mixed up
22:44 mshadle left #gluster
22:44 JoeJulian Now for reading, what BSTR wants to do is mount server3. If server3 goes down, he wants the client to roll over to server2. if both are down he wants server1.
22:44 cjohnston_work yea BSTR works for me :-)
22:44 JoeJulian Ah, ok.
22:44 cjohnston_work im just jumping into this after he explained to me the problems he was seeing
22:45 JoeJulian @glossary
22:45 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
22:45 jdarcy joined #gluster
22:45 JoeJulian So a brick is a piece of a volume's storage. It's intended to be managed by the glusterfs server.
22:45 cjohnston_work we are also running RHEL6.2 storage in prod for other projects but this is another area we want to explore for readonly data for global client (4k nodes)
22:46 JoeJulian btw... if you're a Red Hat Storage customer, you should really have one of their engineers get you sussed. They're pretty good at it.
22:46 cjohnston_work we are doing this setup on fedora bleeding edge
22:47 JoeJulian Ok, not saying I wouldn't help, just that they're a valuable resource.
22:47 cjohnston_work unfortunately our support contract wont cover our fedora setup but we are poking at them for more information on their version
22:48 JoeJulian aaanyway... So what you want isn't directly supported. There's no way for the client to hierarchically re-mount servers up the geo-rep tree.
22:49 cjohnston_work so yes, what BSTR is asking we would want to automagically work (without having to replicate to another volume) but to have the client be aware of its closest slave to read from, and if that slave goes down, failover
22:49 JoeJulian You'd have to have some sort of script that would detect the lack of access and umount/mount its way up the tree - or use a vip to make it work using nfs.
22:50 cjohnston_work but if we can currently do something today with what you are telling me with replicating master:/myvol -> slave:/myreplicatedvol that might buy us some time until something like this is supported
22:51 JoeJulian The whole srv record concept I threw out there occurred to me for completely unrelated reasons, but might be able to be utilized for what you're asking as well.
22:52 cjohnston_work this concept of what we are asking for, is this something the gluster community would accept as a feature request? seems like a pretty reasonable thing to use; otherwise I am not quite sure what the purpose of a geo-replicated slave is, other than failover
22:52 JoeJulian If I were doing some read-only data that I wanted regional locality, I would probably go with nfs mounts using a floating ip. Using whatever vpn trickery it would require to make that happen.
22:53 JoeJulian Sure! file a bug report here:
22:53 glusterbot http://goo.gl/UUuCq
22:53 cjohnston_work yea - I mean we could use NFS in the remote region to mount up to its closest slave
22:53 jdarcy joined #gluster
22:53 JoeJulian I wonder if the kernel updates ips on nfs mounts if the dns record changed...
22:54 BSTR JoeJulian : we would just use a vip
22:55 JoeJulian Since it sounds like you're not using replicated volumes (which would mitigate the "my local server's down" problem), then there's no direct need for the fuse client.
22:55 cjohnston_work I see where you are going with this
22:56 JoeJulian The other option, of course, is that your regional volumes are replicated across multiple (2 or 3) servers so if a server's down nobody cares.
22:56 cjohnston_work so the furthest one replicated would just read from its closest slave (based off our fstab or automount etc)
22:56 JoeJulian correct
22:56 cjohnston_work but if it needed to write to it...?
22:56 JoeJulian Then it has to write to the top. The master.
22:56 cjohnston_work I thought we can only write to the master
22:56 cjohnston_work okie so this would be for purely readonly data
22:56 JoeJulian Yes.
22:58 cjohnston_work wish we had a virtual whiteboard here to show you but I think you get the idea
22:59 JoeJulian I think I do as well. I think it's a good topic for my next blog post (which won't happen until next week). I think it's also the first time I've considered making a video post.
22:59 cjohnston_work master brick (2 nodes) -> regional slave (2 nodes) -> colo slave (2 nodes)
22:59 cjohnston_work thats the sequence of the replication
22:59 cjohnston_work oh nice, where is your blog?
23:00 mshadle joined #gluster
23:00 JoeJulian http://joejulian.name
23:00 glusterbot Title: JoeJulian.name (at joejulian.name)
23:00 mshadle is the output from 'gluster volume heal <vol> info" the pending changes? one node has 0, the other has what looks like maybe some of those missing files /  not synced yet
23:01 JoeJulian yes
23:03 jag3773 joined #gluster
23:05 JoeJulian cjohnston_work: btw... two-way geo-replication is also on the drawing board. I think the intention is to get that into the next version after 3.4.
23:07 cjohnston_work JoeJulian thats great and I am pushing our RedHat reps about this topic as well
23:12 BSTR left #gluster
23:21 cyberbootje are there other methods besides nfs to mount to gluster?
23:21 cyberbootje on windows..
23:22 semiosis cyberbootje: run samba on a linux machine with a fuse client
23:22 mshadle a mount on top of a mount?
23:22 cyberbootje was kinda hoping to avoid samba actually... there is no gluster client?
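        (A sketch of the samba re-export semiosis means: mount the volume with the fuse client on a linux box, then share that mount point via smb.conf; the share name and path are illustrative:
            [glustershare]
                path = /mnt/myvol        ; a glusterfs fuse mount
                read only = no
                browseable = yes
        Windows then speaks ordinary SMB to the linux box, which speaks glusterfs to the bricks.)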
23:24 cyberbootje can't seem to get nfs working for some weird reason
23:27 Humble joined #gluster
23:30 BSTR joined #gluster
23:35 JoeJulian Nope, there's no FUSE in the Windows kernel.
23:36 Era joined #gluster
23:36 cyberbootje is there really no one who has this working with windows nfs?
23:37 JoeJulian Since someone wrote an ext2 driver for windows, I suppose it's possible, but I don't think we have anyone that's a windows kernel expert currently working on gluster.
23:38 cyberbootje i just have to get the mount working, nfs is fine just that it does not connect
23:39 JoeJulian which windows version?
23:40 cyberbootje 2008 R2
23:40 cyberbootje nfs role has been installed
23:40 JoeJulian My guess is that it's either not supporting v3 or it's not trying to connect over tcp. Have you checked with wireshark?
23:41 cyberbootje my guess is it is connecting through smb even when i put nfs on priority
23:41 cyberbootje and i even think it is connecting through udp, not tcp
23:43 hagarth joined #gluster
23:43 semiosis cyberbootje: tcpdump it
23:43 semiosis or wireshark on windows
23:48 cyberbootje port: 983
23:51 cyberbootje how do i get windows to use tcp
23:51 JoeJulian Call MS support. ;)
23:51 cyberbootje omg:-)
23:51 cyberbootje or maybe setup gluster to use udp?
23:52 JoeJulian I tried googling for it, and there's nothing easily found.
23:52 JoeJulian And gluster cannot use anything but TCP, v3.
23:52 cyberbootje JoeJulian: i know, dod some googling too
23:52 cyberbootje did*
23:52 semiosis http://support.microsoft.com/kb/831908
23:52 glusterbot Title: Performance is slow if you use Client for NFS and UDP (at support.microsoft.com)
23:53 semiosis the MORE INFORMATION section looks helpful
23:53 glusterbot New news from newglusterbugs: [Bug 915996] Cascading Geo-Replication Weighted Routes <http://goo.gl/eyX0V>
23:53 semiosis "NFS client: Open the Client for NFS window in the Services for UNIX administration console, click the Performance tab, and then select TCP in the Transport protocol list."
23:54 JoeJulian Of course, because TCP is a PERFORMANCE option... morons...
23:55 JoeJulian Have I said how big of a fan I am of Microsoft?
23:55 elyograg proximity doesn't help, then? ;)
23:56 JoeJulian Makes it worse. I know some of them.
23:57 mshadle left #gluster
23:57 JoeJulian They're *still* running 10mbit on hubs in some of their offices.
23:58 cyberbootje JoeJulian: don't get me started on that whole M$ bullsh*t...
23:58 cyberbootje only reason is that a customer really really really wants it :-(
23:59 elyograg windows chatty nature is probably just about enough to saturate 10mb :)
23:59 JoeJulian Especially when it's not switched.
