
IRC log for #gluster, 2014-02-07


All times shown according to UTC.

Time Nick Message
00:05 georgeh|workstat joined #gluster
00:06 dbruhn joined #gluster
00:17 badone__ joined #gluster
00:21 dbruhn joined #gluster
00:28 mattappe_ joined #gluster
00:37 badone__ joined #gluster
00:42 chirino joined #gluster
00:46 LessSeen joined #gluster
00:49 overclk joined #gluster
01:02 mattappe_ joined #gluster
01:02 avalys_ joined #gluster
01:07 avalys_ hi folks.  Let's say I have a 6-brick gluster volume initialized with "replicate 2".  Each brick is on a separate server.  If those servers each have the gluster volume mounted locally, and write to it, is there any preference for writing one of the replicated copies to their local brick, or will gluster choose from all the available bricks equally?
01:09 elyograg avalys_: the gluster client writes to all replicas at the same time.
01:09 tokik joined #gluster
01:10 avalys_ right, but with "replicate 2", it is only going to write to 2 bricks, and it could potentially choose the local brick plus one remote copy
01:10 avalys_ right?
01:10 avalys_ I'm just wondering if there is any way to set it to "prefer" the local brick
01:11 avalys_ for one of the replicated copies
01:11 badone__ joined #gluster
01:11 elyograg with replica 2 there will always be exactly two replicas.  It will write to the local brick and the remote brick on all requests, simultaneously, and I'm pretty sure that it won't return from the write until both writes say they are done.
01:12 bala joined #gluster
01:14 avalys_ yeah, I understand.  I'm just wondering if it might choose 2 remote bricks instead of 1 local + 1 remote.  would be a factor of 2 increase in network traffic
01:14 dbruhn avalys_, the distributed hash table actually determines which pair of bricks contain the data based off the hash of the file name
01:15 avalys_ dbruhn: oh, so there is no preference for local storage
01:15 dbruhn no, it would defeat how gluster works
01:16 dbruhn gluster hashes the name of the file
01:16 dbruhn that numerical representation of the file gets evenly distributed across the nodes based on I think the first two characters, or.... I can't remember how many
01:17 dbruhn the files are then placed on the bricks based on that division
01:17 dbruhn makes lookups much faster
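(A rough illustration of the placement dbruhn describes: the DHT layout and the resulting file placement can be inspected with virtual xattrs. The mount point, volume and brick path below are made up.)

    # On a client mount: ask which brick(s) actually hold a given file.
    getfattr -n trusted.glusterfs.pathinfo -e text /mnt/gluster/some/file.txt

    # On a brick: show the hash range this directory owns in the DHT layout.
    getfattr -n trusted.glusterfs.dht -e hex /export/brick1/some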
01:18 avalys_ dbruhn: ok, makes sense.
01:19 mattappe_ joined #gluster
01:19 avalys_ unrelated question: is it normal for the "gluster volume create", "gluster volume start", etc. commands to take a while to execute (e.g. 1 minute) on unloaded systems connected by a fast local network?
01:20 avalys_ or is there some kind of DNS resolution going on that is timing out?
01:21 dbruhn hmm, shouldn't take that long
01:21 dbruhn I mean a couple - 10 seconds maybe
01:21 elyograg 1 minute for that seems excessive.  I would check for some kind of a low-level problem.
01:22 dbruhn what distro?
01:22 avalys_ ubuntu 12.04
01:22 dbruhn iptables?
01:22 dbruhn and have you checked the /var/log/glusterd/cli log?
01:22 dbruhn or the other logs to see whats up
01:22 dbruhn actually are any of the commands providing feedback?
01:22 avalys_ no iptables or anything
01:23 avalys_ well, they all work without reporting any errors
01:23 avalys_ they just take a while
01:23 dbruhn that's odd
01:24 avalys_ didn't notice anything wrong with the filesystem performance, just the setup commands.  I'll check out the logs.
01:25 dbruhn_ joined #gluster
01:26 mattapperson joined #gluster
01:26 shyam joined #gluster
01:28 avalys_ thanks for the help!
01:34 dbruhn joined #gluster
01:38 itisravi joined #gluster
01:57 durzo joined #gluster
01:58 durzo hey guys, gluster has been filling up brick log file over the last 24 hours up to 10GB worth of this: http://pastebin.com/raw.php?i=m1dsVUvP every line is a different filename... any ideas?
01:58 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
01:59 durzo pastebin raw has 0 ads btw
02:01 Cenbe joined #gluster
02:02 dbruhn are all the file systems your bricks are on actually mounted? and are the files there
02:06 dbruhn durzo, ?
02:06 durzo dbruhn, yes
02:06 durzo sorry, yes they are all mounted and the volume is functioning well
02:06 durzo self heal reports no failures, no split brain
02:07 dbruhn can you use fpaste.org and show "gluster volume status"
02:07 durzo yes
02:08 dbruhn paste the link once you have it
02:09 durzo http://ur1.ca/gkvzn
02:09 glusterbot Title: #75172 Fedora Project Pastebin (at ur1.ca)
02:09 durzo had to obfuscate the servername as it exposes client..
02:09 dbruhn no worries
02:09 dbruhn what version of gluster?
02:10 durzo 3.3.2-ubuntu1~precise1
02:10 durzo from semiosis ubuntu ppa
02:10 dbruhn yep
02:11 dbruhn what about df?
02:12 durzo df shows /export/sitedata mounted from /dev/md0 (raid 0 JBOD in amazon AWS)
02:12 durzo on both servers
02:12 durzo do you need a paste?
02:12 dbruhn nah
02:13 dbruhn it seems like a bunch of your files are missing on that one brick
02:13 durzo thats what I thought, but self heal indicates to the contrary
02:14 durzo would a reboot of the node be any good?
02:14 dbruhn the files in .glusterfs are links; check to see if the files they link to are good
02:14 dbruhn I can't imagine it would hurt...
02:14 durzo the files dont exist on either datastore
02:14 durzo have checked a few of them
02:15 dbruhn hmm, are the links on both servers?
02:15 durzo the /export/sitedata/.glusterfs/* files referenced in the log dont exist anywhere
02:17 dbruhn sec
02:17 mattappe_ joined #gluster
02:17 Cenbe joined #gluster
02:18 dbruhn the logs are indicating it can't find the file the link points to, so anything showing up in the logs is going to be gone, because the link got cleaned up
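(For reference, a sketch of how such a .glusterfs entry can be checked by hand: for regular files the gfid entry is a hard link to the real file, so an orphaned entry shows a link count of 1. The brick path and gfid below are placeholders.)

    # A link count of 1 means no real file shares the inode any more.
    stat -c '%h %n' /export/sitedata/.glusterfs/ab/cd/<gfid>

    # If something does share it, this finds the real path.
    find /export/sitedata -samefile /export/sitedata/.glusterfs/ab/cd/<gfid> \
        -not -path '*/.glusterfs/*'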
02:20 bala joined #gluster
02:21 durzo dbruhn, 10GB in 24 hours of that so far.. what could cause that?? to the best of my knowledge there are not many files being created/deleted there, in fact any files created should remain
02:21 durzo the entire gluster store is only 3GB
02:22 dbruhn what log was that error from?
02:23 durzo . /var/log/glusterfs/bricks/export-sitedata.log
02:24 dbruhn anything in the other logs?
02:24 dbruhn The gluster logs get extremely chatty when there is something up
02:24 dbruhn I have had logs consume a 50GB partition in less than a day
02:24 dbruhn frustrating to say the leads
02:24 dbruhn least
02:25 durzo glusterfs/glustershd.log is full of 'W [client3_1-fops.c:647:client3_1_unlink_cbk] 0-ds0-client-0: remote operation failed: No such file or directory'
02:26 durzo everything else looks normal
02:26 harish joined #gluster
02:37 shyam joined #gluster
02:37 sarkis joined #gluster
02:41 bala joined #gluster
02:44 mattappe_ joined #gluster
02:57 ira joined #gluster
02:59 chirino joined #gluster
03:07 bharata-rao joined #gluster
03:10 jag3773 joined #gluster
03:29 shubhendu joined #gluster
03:49 kanagaraj joined #gluster
03:49 RameshN joined #gluster
03:57 davinder joined #gluster
03:57 shyam joined #gluster
04:01 itisravi joined #gluster
04:10 dbruhn joined #gluster
04:12 CheRi joined #gluster
04:16 shylesh joined #gluster
04:21 shyam joined #gluster
04:25 davinder joined #gluster
04:26 tokik_ joined #gluster
04:32 kdhananjay joined #gluster
04:40 rjoseph joined #gluster
04:51 ndarshan joined #gluster
04:56 badone__ joined #gluster
05:05 spandit joined #gluster
05:09 badone joined #gluster
05:12 bala joined #gluster
05:12 vpshastry joined #gluster
05:16 bala joined #gluster
05:17 saurabh joined #gluster
05:20 prasanth joined #gluster
05:21 rastar joined #gluster
05:21 raghu joined #gluster
05:22 nshaikh joined #gluster
05:23 lalatenduM joined #gluster
05:26 shubhendu joined #gluster
05:26 RameshN_ joined #gluster
05:27 ppai joined #gluster
05:28 hagarth joined #gluster
05:33 DV joined #gluster
05:38 tyrfing_ joined #gluster
05:44 surabhi joined #gluster
05:50 mohankumar joined #gluster
06:05 dusmant joined #gluster
06:10 jag3773 joined #gluster
06:13 hchiramm_ joined #gluster
06:16 davinder joined #gluster
06:23 psharma joined #gluster
06:23 aravindavk joined #gluster
06:25 davinder joined #gluster
06:48 benjamin_ joined #gluster
06:51 tokik joined #gluster
06:52 vimal joined #gluster
06:56 tyrfing_ joined #gluster
07:03 harish joined #gluster
07:14 samppah purpleidea: ping?
07:15 jtux joined #gluster
07:16 samppah @puppet
07:16 glusterbot samppah: https://github.com/purpleidea/puppet-gluster
07:16 CheRi joined #gluster
07:18 samppah i guess there is no easy way to use iptables to control access to Glusterfs NFS exports?
07:26 yhben joined #gluster
07:27 yhben Brick(s) with the peer server1 exist in cluster ... what can I do to delete the peer?
07:31 ktosiek joined #gluster
07:32 lalatenduM yhben, to detach a peer here is the command "gluster peer detach <HOSTNAME> [force] - detach peer specified by <HOSTNAME>"
07:33 lalatenduM yhben, however I am not sure if you really want to remove a peer
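(A hedged sketch of what the "Brick(s) with the peer ... exist in cluster" error usually implies: the peer's bricks have to be removed or migrated before the detach is accepted. Hostname, volume and brick path are placeholders; the exact remove-brick arguments depend on the volume layout, and replicated volumes also need a replica count.)

    # Drain and remove the peer's brick(s) first (3.3+ syntax).
    gluster volume remove-brick myvol server1:/export/brick1 start
    gluster volume remove-brick myvol server1:/export/brick1 commit

    # Then the detach should go through.
    gluster peer detach server1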
07:35 yhben thanks, lalatenduM.
07:35 lalatenduM yhben, np
07:39 rossi_ joined #gluster
07:48 DV joined #gluster
07:51 glusterbot New news from newglusterbugs: [Bug 1062522] glusterfs: failed to get the 'volume file' from server <https://bugzilla.redhat.com/show_bug.cgi?id=1062522>
08:03 eseyman joined #gluster
08:05 ekuric joined #gluster
08:05 jporterfield joined #gluster
08:10 mohankumar joined #gluster
08:12 shyam joined #gluster
08:13 ctria joined #gluster
08:18 blook joined #gluster
08:18 shyam joined #gluster
08:41 mohankumar joined #gluster
08:47 ctria joined #gluster
08:49 keytab joined #gluster
08:50 haomaiwa_ joined #gluster
08:51 marbu joined #gluster
08:54 ctria joined #gluster
08:57 Norky joined #gluster
08:58 mgebbe_ joined #gluster
09:03 hybrid512 joined #gluster
09:13 bharata-rao joined #gluster
09:23 liquidat joined #gluster
09:25 ctria joined #gluster
09:32 GabrieleV joined #gluster
09:35 andreask joined #gluster
09:38 Slash joined #gluster
09:39 baoboa joined #gluster
09:39 RameshN_ joined #gluster
09:39 RameshN joined #gluster
09:40 Oneiroi joined #gluster
09:45 RedShift joined #gluster
09:48 shubhendu joined #gluster
09:49 kanagaraj_ joined #gluster
09:52 matclayton joined #gluster
10:10 RameshN joined #gluster
10:12 shylesh joined #gluster
10:12 bharata-rao joined #gluster
10:13 RameshN_ joined #gluster
10:17 dusmant joined #gluster
10:19 SteveCoo1ing Hi, I think I'm seeing some kind of memory leak on 3/4 of my nodes. I was hoping 3.4.2 would fix it, but it does not seem to. Any pointers to how I can narrow it down?
10:19 SteveCoo1ing Here's a memory usage plot: https://dl.dropboxusercontent.com/u/683331/memory-pinpoint.png
10:26 andreask joined #gluster
10:28 shubhendu joined #gluster
10:28 jporterfield joined #gluster
10:36 hybrid512 joined #gluster
10:36 calum_ joined #gluster
10:39 TvL2386 joined #gluster
10:40 jmarley joined #gluster
10:45 ctria joined #gluster
10:51 DV joined #gluster
10:53 hagarth joined #gluster
11:01 Oneiroi joined #gluster
11:05 matclayton joined #gluster
11:21 glusterbot New news from newglusterbugs: [Bug 1060104] glusterfsd brick process crash : getxattr generates SEGV while fetching glusterfs.ancestry.path key <https://bugzilla.redhat.com/show_bug.cgi?id=1060104>
11:36 edward1 joined #gluster
11:37 ctria joined #gluster
11:40 keytab joined #gluster
11:42 rjoseph left #gluster
11:48 ndarshan joined #gluster
11:55 recidive joined #gluster
12:04 lalatenduM joined #gluster
12:11 mattapperson joined #gluster
12:17 burn420 joined #gluster
12:17 burn420 I seem to be experiencing some kind of memory or cpu leak caused by gluster-fuse
12:18 burn420 and ftp
12:18 burn420 tried both proftpd and vsftpd and get these kernel messages causing the ftp session to hang
12:19 burn420 INFO: task proftpd:7831 blocked for more than 120 seconds.
12:19 burn420 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
12:19 burn420 proftpd       D 0000000000000000     0  7831   4384 0x00000004
12:19 burn420 ffff88031f3bde38 0000000000000082 0000000000000000 ffffffff81051439
12:19 burn420 ffff88031f3bddc8 0000000300000000 ffff88031f3bddd8 ffff880335936040
12:19 burn420 ffff8803365cfab8 ffff88031f3bdfd8 000000000000fb88 ffff8803365cfab8
12:19 burn420 Call Trace:
12:19 burn420 [<ffffffff81051439>] ? __wake_up_common+0x59/0x90
12:19 burn420 [<ffffffffa0213075>] fuse_request_send+0xe5/0x290 [fuse]
12:20 burn420 [<ffffffff81096da0>] ? autoremove_wake_function+0x0/0x40
12:20 burn420 [<ffffffffa0218b36>] fuse_flush+0x106/0x140 [fuse]
12:20 burn420 [<ffffffff8117e0bc>] filp_close+0x3c/0x90
12:20 burn420 [<ffffffff8117e1b5>] sys_close+0xa5/0x100
12:20 burn420 [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
12:20 diegows joined #gluster
12:21 lalatenduM burn420, you should use http://fpaste.org/ for pasting big outputs
12:21 glusterbot Title: New paste Fedora Project Pastebin (at fpaste.org)
12:22 burn420 oops
12:22 burn420 I meant to paste it once...
12:22 burn420 it seems to cause ftp to hang forever and I cannot kill it; the only way to fix it is to reboot the server. I guess I am going to open a bug report if I can find where to do so
12:22 burn420 yeah I don't know why it pasted it more than once
12:22 burn420 lol pasted in paste and it pasted it once lol
12:23 burn420 http://fpaste.org/75237/
12:23 glusterbot Title: #75237 Fedora Project Pastebin (at fpaste.org)
12:26 lalatenduM burn420, I am not familiar with fuse, I suggest you send a mail to gluster-users with your issue http://www.gluster.org/interact/mailinglists/
12:26 itisravi joined #gluster
12:26 burn420 gluster-fuse
12:26 burn420 thanks I will try that
12:26 ctria joined #gluster
12:27 lalatenduM burn420, gluster-fuse is actually fuse packaged inside gluster
12:28 kdhananjay joined #gluster
12:32 rwheeler joined #gluster
12:32 burn420 right
12:32 burn420 so who makes that pack ?
12:32 burn420 should I send to the mailing list still? I think its a bug in fuse
12:32 burn420 but not sure
12:33 burn420 I saw someone had a similar issue with fuse
12:33 burn420 not exactly the same but similar
12:33 burn420 I am typing up an email now to the list
12:33 lalatenduM burn420, you can get the source code of glusterfs and look for "contrib" directory
12:34 lalatenduM burn420, inside "contrib" you will see fuse related directories
12:34 burn420 ok
12:34 burn420 I will take a look
12:35 lalatenduM burn420, I think we should send a mail about it to the mailing list so that somebody can fix the issue even if it is in the fuse module
12:35 burn420 I am typing up the email now with all the information...
12:35 burn420 I appreciate your help...
12:35 lalatenduM burn420, np :)
12:38 kkeithley joined #gluster
12:46 mattappe_ joined #gluster
12:48 purpleidea samppah: pong/
12:54 shubhendu joined #gluster
12:58 bennyturns joined #gluster
12:59 _Bryan_ joined #gluster
13:10 hchiramm_ joined #gluster
13:13 ctria joined #gluster
13:15 vpshastry left #gluster
13:15 chirino joined #gluster
13:18 plarsen joined #gluster
13:20 mattappe_ joined #gluster
13:20 Ark_explorys joined #gluster
13:23 ira joined #gluster
13:31 andreask joined #gluster
13:36 hchiramm_ joined #gluster
13:41 ngoswami joined #gluster
13:41 DV joined #gluster
13:44 mattappe_ joined #gluster
13:48 nikk_ hm
13:48 nikk_ - instance #blahblah [WARNING] collected 100 metrics
13:48 nikk_ jmx check pulling more metrics than dd can handle?
13:48 nikk_ wow wrong channel ><
13:51 wcchandler joined #gluster
13:52 matclayton joined #gluster
13:55 nik_ joined #gluster
13:55 shyam joined #gluster
13:56 n1k joined #gluster
13:59 nikk_ joined #gluster
13:59 wcchandler i'm sure this gets asked frequently, but is it possible to deploy gluster on a single node with a local raid-1 (like) setup? or should I "fake it" with 2 virtual machines on the same node?  or should i just go with lvm?
14:02 kkeithley Yes, you can definitely use gluster on a single node, with a single raid-1 brick.
14:02 ktosiek joined #gluster
14:03 kkeithley It's not really buying you anything though. Are you going to use NFS? Why not just use regular NFS on the raid-1 volume?
14:05 vpshastry joined #gluster
14:05 wcchandler kkeithley: That was the plan.  I *could* just do kNFS but the plan is to buy more nodes and add to the pool over time.
14:06 japuzzo joined #gluster
14:06 kkeithley if that's your plan, then yes, you can start with the single node and grow it later.
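(A minimal sketch of that single-node start and the later growth; server names, volume name and brick paths are assumptions.)

    # Single server, single RAID-1 backed brick.
    gluster volume create myvol server1:/export/raid1/brick
    gluster volume start myvol

    # Later, when more nodes are added: probe them, add bricks, rebalance.
    gluster peer probe server2
    gluster volume add-brick myvol server2:/export/raid1/brick
    gluster volume rebalance myvol start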
14:06 wcchandler kkeithley: wonderful, thank you :)  btw, your documentation is a pleasure to read
14:07 kkeithley glad you like it.
14:08 rfortier1 joined #gluster
14:10 japuzzo joined #gluster
14:11 jmarley joined #gluster
14:15 nikk_ i wasn't able to find it in the documentation anywhere, but i'm wondering what the difference between replicated and distributed-replicated is
14:15 mattappe_ joined #gluster
14:17 Ark_explorys distributed-replicated is a set of 4 bricks that you can spread data onto. If each brick was 1TB you would have 2TB of usable space
14:18 Ark_explorys think like 2 replicated volumes then running raid over them
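(For illustration, a distributed-replicated volume of four 1TB bricks as described above could be created like this; hosts and paths are hypothetical, and bricks are paired in the order they are listed.)

    # server1/server2 form one replica pair, server3/server4 the other;
    # files are distributed across the two pairs, so roughly 2TB is usable.
    gluster volume create myvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1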
14:18 vpshastry left #gluster
14:18 nikk_ if i have two hosts w/1tb each, and i have 1.5tb of data, what happens when i lose one of the hosts?
14:18 Ark_explorys since you only have 2 host you will not be able to write to the volume
14:19 nikk_ it seems like the most fault-tolerant type is plain replicated then
14:19 nikk_ raw disk isn't an issue for me, just fault tolerance :)
14:20 Ark_explorys depends on space for volumes. If you only have 2 servers I think replicated sounds good
14:20 lalatenduM nikk_, I believe you are looking for this http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf :)
14:20 mattapperson joined #gluster
14:20 Ark_explorys ^^
14:20 lalatenduM nikk_, Ark_explorys I think the admin guide has best explanation about different types of volumes :)
14:21 nikk_ it'll probably be 8 clients, not sure if i should just run two separate servers and make the other 8 clients.  i'm new to this obviously.
14:23 FrodeS self-replication on 3.4.2 - if I delete a file directly on one of the bricks (big no-no, I know) - is there a reason why it will fail with "no such file or directory" on self-heal, while if I create an empty file it will self-heal successfully?
14:24 nikk_ Ark_explorys, lalatenduM: thanks for the help, i'll look through the guide.  i'm setting up a new batch of servers for web content, just want to make sure i get everything done right the first time through.
14:24 lalatenduM FrodeS, because you deleted the file on the backend and gluster does not have any info on that, gluster maintains info about everything in glusterfs separately
14:25 nikk_ biggest thing i wasn't sure of is if every host should be a server+client or if i should have separate servers and make all the front-end hosts clients
14:25 nikk_ i've seen both recommended
14:25 lalatenduM nikk_, in an ideal environment servers and clients should be different
14:25 dbruhn joined #gluster
14:25 lalatenduM nikk_, because you don't want to give client issues to the server
14:26 nikk_ right
14:26 nikk_ these will be cross-datacenters, just don't want to get split-brain if i lose a wan link
14:26 lalatenduM nikk_, e.g. if any application causes a scenario where there is no memory, your server will be affected too
14:26 nikk_ right
14:26 lalatenduM nikk_, yup
14:27 mattapperson joined #gluster
14:27 nikk_ cool, thank you :]
14:27 B21956 joined #gluster
14:27 lalatenduM nikk_, welcome :)
14:27 nikk_ i'm a little frightened about using gluster over something like nfs + cached or just putting varnish over top of nfs
14:28 nikk_ just gotta move out of that comfort zone
14:28 nikk_ the guide is for 3.3.0, i guess the concepts are all the same, not sure if there's one for 3.4.x?
14:29 lalatenduM nikk_, this might interest you http://stackoverflow.com/questions/16095450/15-million-static-files-shared-via-nfs
14:29 glusterbot Title: html - 15 million static files shared via NFS - Stack Overflow (at stackoverflow.com)
14:29 nikk_ hmm
14:29 lalatenduM nikk_, yes, the guide is for 3.3 but concepts mostly universal
14:30 lalatenduM for glusterfs
14:30 nikk_ cool
14:38 FrodeS lalatenduM: so it's per design? the only way to fix such issues is to manually run a full self-heal?
14:43 haomaiwa_ joined #gluster
14:43 lalatenduM FrodeS, Not sure. GlusterFS as a filesystem maintains the data structures a filesystem typically maintains, so if you do something on the backend which contradicts its information, theoretically self-heal should be able to fix it, but I am not sure about it. It is not that direct :)
14:44 lalatenduM FrodeS, you can try doing a force and if it heals :)
14:44 itisravi joined #gluster
14:45 lalatenduM s/if/check if/
14:45 glusterbot What lalatenduM meant to say was: FrodeS, you can try doing a force and check if it heals :)
14:46 johnmilton joined #gluster
14:46 cyberbootje joined #gluster
14:49 lalatenduM FrodeS, I think you meant "volume heal <VOLNAME> full" by "force"
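(For reference, the heal invocations being contrasted here, with a placeholder volume name.)

    # Heal only the entries gluster already knows are pending.
    gluster volume heal myvol

    # Full crawl of the volume; this is the variant that picks up files
    # removed directly from a brick.
    gluster volume heal myvol full

    # See what is pending, healed, or split-brain.
    gluster volume heal myvol info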
14:49 X3NQ joined #gluster
14:54 failshell joined #gluster
14:58 shylesh joined #gluster
15:03 recidive joined #gluster
15:03 ekuric joined #gluster
15:04 tjikkun_work joined #gluster
15:07 sarkis joined #gluster
15:10 zaitcev joined #gluster
15:13 dusmant joined #gluster
15:17 shyam joined #gluster
15:17 plarsen joined #gluster
15:26 mattapp__ joined #gluster
15:28 vpshastry1 joined #gluster
15:28 vpshastry1 left #gluster
15:43 FrodeS lalatenduM: doing volume heal full will fix it - but it is a bit strange that the background heal triggered by stat works as long as there is an empty (touch file) there
15:43 FrodeS but not if the file isn't there
15:43 FrodeS as in, it will self-heal automatically with an empty file - if you remove the empty file it will not
15:44 lalatenduM FrodeS, I think self-heal automatically only checks the metadata ,
15:45 mattap___ joined #gluster
15:46 FrodeS lalatenduM: ok, thanks!
15:51 bugs_ joined #gluster
15:55 edward2 joined #gluster
16:03 hagarth joined #gluster
16:05 bet_ joined #gluster
16:07 vpshastry joined #gluster
16:17 jag3773 joined #gluster
16:22 jobewan joined #gluster
16:33 ndk joined #gluster
16:33 ekuric left #gluster
16:49 vpshastry left #gluster
16:52 glusterbot New news from newglusterbugs: [Bug 1062674] Write is failing on a cifs mount with samba-4.1.3-2.fc20 + glusterfs samba vfs plugin <https://bugzilla.redhat.com/show_bug.cgi?id=1062674>
16:57 mattappe_ joined #gluster
16:59 Ark_explorys any idea how to fix a umount -f on a gluster client that was using the mount? Kind of like if nfs fails
17:00 dbruhn Ark_explorys, you mean it won't unmount?
17:00 dbruhn umount -l
17:00 Ark_explorys no it was mounted, I had something using it and then  I ran umount -f to simulate gluster failing
17:01 dbruhn Not sure what the question is here?
17:01 Ark_explorys now mount still shows it but df -h shows df: `/media/gluster': Transport endpoint is not connected
17:01 Ark_explorys how can i reconnect it without restarting the box
17:02 dbruhn umount -l will disconnect it
17:02 dbruhn then you can remount it
17:02 dbruhn l stands for lazy
17:02 dbruhn it makes it not wait for a response to the disconnect
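(A minimal sketch of that recovery; mount point, server and volume name are assumptions.)

    # Lazily detach the dead mount, then mount it again.
    umount -l /media/gluster
    mount -t glusterfs server1:/myvol /media/gluster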
17:02 Ark_explorys redhat does not have umount -l
17:02 Ark_explorys i tried -f
17:02 Ark_explorys oh
17:02 Ark_explorys sorry
17:02 Ark_explorys lol
17:02 dbruhn I am running red hat and have used that command
17:03 Norky RHEL does have umount -l
17:03 dbruhn hahah
17:03 dbruhn ok
17:03 Ark_explorys dam you are the man
17:03 Ark_explorys i did not do the full command
17:03 Norky RHEL 6 does anyway, don't have an older version to check
17:03 mattapperson joined #gluster
17:03 Ark_explorys dbruhn: thank you again
17:03 dbruhn np
17:04 Ark_explorys you are an endless pit of information
17:04 dbruhn I am a shallow pool of a few bits of information, the endless pits are the quiet guys around here
17:04 Norky I have an interesting (read "mad") problem
17:05 dbruhn What's up Norky?
17:05 Norky text files that are updated (appended) on Windows (using samba VFS plugin) do not 'appear' updated on Linux clients
17:06 Norky if I compare md5sums generated on Windows against those directly on the bricks, they match
17:06 dbruhn i ran into that issue back in the 2.x days
17:07 dbruhn the fix then was to adjust client side caching for small files
17:07 dbruhn now... I have no idea
17:08 Norky if I examine the files using something like vim on a Linux client it will show the correct number of bytes, but the additions are all apparently non-printable characters
17:10 Norky hmmm, so it might be a caching error?
17:10 dbruhn Not sure, what do you mean by non printable characters?
17:12 Norky https://www.dropbox.com/s/luhreor2fsya3pq/Screenshot%20from%202014-02-05%2017%3A13%3A49.png
17:12 glusterbot Title: Dropbox - Screenshot from 2014-02-05 17:13:49.png (at www.dropbox.com)
17:12 KyleG joined #gluster
17:12 KyleG joined #gluster
17:12 dbruhn If you edit the file from linux and then open it in windows, what happens then?
17:13 Norky so far as I've seen, that works correctly
17:13 dbruhn I'm thinking gluster isn't your issues, but something in the translation
17:14 dbruhn take gluster out of the picture, move one of the suspect files to linux a different way and see if the issue follows
17:15 KyleG left #gluster
17:15 Norky eh?
17:15 Norky even though the file contents/checksums differ between the brick, and the file viewed through gluster?
17:16 dbruhn agh, sorry I didn't realize the checksums were differing
17:16 Norky Windows checksum and Linux brick checksum are equal
17:16 Norky Linux client checksum is different
17:17 dbruhn fuse client or nfs?
17:17 Norky fuse
17:18 Norky I can temporarily fix it by unmounting it and remounting
17:18 Norky but any subsequent edits made on a Windows CIFS client lead me back to the same error
17:18 Norky I'll try NFS mount
17:19 dbruhn yep, sec I am trying to see if that old cache setting is still available
17:20 recidive joined #gluster
17:20 tdasilva joined #gluster
17:21 Norky hmm, I've found something else out: while the Linux client sees the correct file size in the metadata (e.g. ls -l), wc, for example, can only read the old 'size'
17:22 dbruhn This disabled the performance cache in 3.1 "gluster volume mirror set performance.quick-read off"
17:22 dbruhn but I am not sure it's currently a setting you can adjust
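(The generic form of that setting, for readability; "mirror" in the line above is presumably the volume name. Whether these still help on 3.4 is speculative, but the option names do exist.)

    gluster volume set <VOLNAME> performance.quick-read off
    # Related client-side caching knobs sometimes tuned for stale-read issues:
    gluster volume set <VOLNAME> performance.io-cache off
    gluster volume set <VOLNAME> performance.stat-prefetch off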
17:22 zerick joined #gluster
17:22 Norky hmm, maybe I'm misinterpreting wc
17:22 dbruhn what version are you running
17:23 Norky [dan@lnaapp100 ~]$ wc -c /tid/tests/comparison.txt
17:23 Norky 147 /tid/tests/comparison.txt
17:23 Norky [dan@lnaapp100 ~]$ wc -m /tid/tests/comparison.txt
17:23 Norky 10 /tid/tests/comparison.txt
17:23 Norky [dan@lnaapp100 ~]$ wc -m -c /tid/tests/comparison.txt
17:23 Norky 10  10 /tid/tests/comparison.txt
17:23 Norky pardon the large paste but I want someone else to see that
17:23 Norky glusterfs-3.4.0.57rhs-1.el6_5.x86_64
17:24 Norky I'm using Red Hat Storage, so I will be opening a ticket with RH shortly
17:24 dbruhn Ahh, yeah, thats why you pay for support, so you don't have to deal with us ;)
17:24 dbruhn I would be interested to see the resolution
17:25 Norky I'll try to drop a link to the case in here once it's done
17:26 Norky actually, it wont be public, I'll copy and paste it somewhere
17:26 rotbeard joined #gluster
17:29 Mo_ joined #gluster
17:30 cp0k joined #gluster
17:31 Norky right, it's working fine using NFS3
17:34 dbruhn Then I bet it's caching the small files on the client side.
17:35 surabhi joined #gluster
17:35 Norky this is happening for any clients, even ones which have not recently (ever?) accessed the small files, I think
17:36 Norky but I'm inclined to agree
17:36 Norky need to do more testing
17:36 Norky thank you for your advice, I'll let you know how Red Hat respond
17:36 Norky though don't hold your breath, I'm on holiday for a week ;)
17:37 dbruhn If I'm holding my breath it has nothing to do with gluster!
17:38 Norky :D
17:38 Norky cheerio
17:40 in joined #gluster
17:41 in Hi guys, I am new here but have been working with Gluster for over a year now in production.
17:41 dbruhn welcoem cp0k
17:42 dbruhn s/welcoem/welcome
17:42 cp0k I am at that magical point of having to upgrade / expand the storage :)
17:42 johnmark cp0k: cool. welcome! :)
17:42 cp0k thanks!
17:42 cp0k 127.0.0.1:/storage            189T  159T   20T  89% /storage
17:43 cp0k glusterfs 3git built on Apr 19 2013 12:41:12
17:43 theron joined #gluster
17:43 dbruhn Sweet! So... what's the first question?
17:43 cp0k the plan is to upgrade to Gluster 3.4.1, wait a day or two to ensure that everything is working as it should and then add the additional new servers / bricks
17:44 shyam joined #gluster
17:44 cp0k I plan on going by the instructions located here: http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
17:44 dbruhn what version are you coming from?
17:44 cp0k glusterfs 3git built on Apr 19 2013 12:41:12
17:45 cp0k it was a git version I used since at the time there was a bug I ran into, unfortunately I do not recall what the bug was
17:45 dbruhn what does "gluster --version" output
17:45 dbruhn or is that what it outputs?
17:45 johnmark ANNOUNCEMENT: Gluster Spotlight on Citrix and Harvard's FASRC starts at the top of the hour
17:46 johnmark see #gluster-meeting for Q&A
17:46 johnmark see ANNOUNCEMENT: Gluster Spotlight on Citrix and Harvard's FASRC starts at the  top of the hour
17:46 cp0k # glusterd --version
17:46 cp0k glusterfs 3git built on Apr 19 2013 12:41:12
17:46 cp0k Repository revision: git://git.gluster.com/glusterfs.git
17:46 johnmark see http://www.gluster.org/2014/02/gluster-spotlight-new-members-expanded-board/ for video feed
17:46 cp0k that is what it outputs
17:47 dbruhn what about just gluster not glusterd
17:47 cp0k same result
17:47 dbruhn weird
17:48 cp0k yea
17:49 dbruhn the reason I say that, is I would expect some sort of point release information
17:49 cp0k so I plan on using the instructions over at http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/ to upgrade to 3.4.1
17:49 cp0k ditto, but unfortunately it is not there
17:49 cp0k it was compiled from source
17:50 cp0k in the Makefile, I see
17:50 cp0k # Tell versions [3.59,3.63) of GNU make to not export all variables.
17:50 shyam joined #gluster
17:51 cp0k at any rate, I have tested performing the upgrade in a staging environment and everything worked out good
17:51 cp0k now comes time to upgrade the production, and for that reason I am making sure that I cross check everything before hand
17:53 dbruhn Well if it
17:53 avati joined #gluster
17:54 dbruhn is working well in test, then you should be good
17:54 dbruhn you'll want to go to 3.4.2 I think
17:54 cp0k so going the route of scheduling a downtime, it seems very straightforward: stop gluster on the storage nodes, then stop it on all the clients, back up /var/lib/glusterd just in case, emerge gluster 3.4.1 (I am running Gentoo) and fire glusterd back up on the storage nodes, then the clients
17:55 dbruhn Yep
17:55 cp0k unmounting and remounting the mount point
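(A hedged sketch of that sequence on one storage node; the init script name and exact package atom may differ per install, and clients get the same package upgrade after unmounting the volume.)

    # Stop the gluster daemons.
    /etc/init.d/glusterd stop
    pkill glusterfsd 2>/dev/null || true

    # Keep a copy of the working directory, just in case.
    cp -a /var/lib/glusterd /var/lib/glusterd.backup-$(date +%F)

    # Upgrade the package, then bring the daemon back up.
    emerge -1v '=sys-cluster/glusterfs-3.4.2'
    /etc/init.d/glusterd start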
17:55 cp0k 3.4.2 is stable correct?
17:55 cp0k I havent had time to check yet :)
17:57 lalatenduM joined #gluster
17:58 marcoceppi joined #gluster
17:58 marcoceppi joined #gluster
18:00 cp0k now after I am successfully upgraded to 3.4.2, will Gluster adapt to the new version all on its own? or are there additional steps I must take to ensure that? I do not see any additional steps listed on the upgrade page
18:00 dbruhn sorry sec
18:00 cp0k since I am not familiar with all the internals of Gluster, it is important that I do not overlook anything :)
18:01 cp0k no worries dbruhn, take your time
18:02 dbruhn cp0k, the reason I was asking about versions is to make sure you aren't on a pre-3.3 version
18:02 dbruhn otherwise you need to follow the steps to upgrade from an earlier version
18:02 dbruhn which are listed
18:03 dbruhn http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
18:05 cp0k dbruhn, I was thinking the exact same thing
18:06 cp0k any ideas where else I may be able to look to see what version I am running?
18:06 plarsen joined #gluster
18:07 cp0k under Gentoo, when I run a pretend emerge of gluster 3.4.2, I am seeing the following:
18:07 cp0k [ebuild     U ~] sys-cluster/glusterfs-3.4.2 [3.3.0]
18:08 cp0k so portage seems to think that I am currently running 3.3.0
18:09 dbruhn cp0k, check and see if there is a .glusterfs directory at the root of your bricks
18:09 dbruhn if you are @ 3.3 or above that will exist
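(The check being described, with a hypothetical brick path.)

    # Present on 3.3+ bricks; absent means a pre-3.3 on-disk layout.
    ls -ld /export/brick1/.glusterfs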
18:10 vpshastry joined #gluster
18:10 vpshastry left #gluster
18:11 cp0k yes, there is in fact a .glusterfs dir in the root of every brick
18:11 dbruhn perfect, then it should be fairly straightforward as a 3.3+ upgrade
18:11 cp0k oh sweet!
18:12 cp0k so the .glusterfs dirs were introduced as of 3.3+ versions?
18:12 dbruhn yep
18:14 hchiramm__ joined #gluster
18:14 cp0k great! thanks dbruhn
18:14 dbruhn np
18:17 cp0k Looks like I will def be hanging out in this channel from now on :)
18:17 cp0k and will be sure to update you guys on the results of the upgrade
18:19 dbruhn Good deal!
18:19 cp0k I assume there are a number of key players of Gluster hanging out in this channel? dbruhn are you one of them?
18:20 dbruhn There are a lot of people who contribute in there, I am just some dude with his own systems trying to give back for the help I have received
18:20 cp0k heh awesome! so looks like you and I are actually in the same boat
18:21 dbruhn most of the guys in this channel are users of the system
18:21 cp0k what version are you running?
18:21 dbruhn 3.3.2
18:22 cp0k gotcha
18:23 cp0k I run Gluster for work, I am a sys admin at a web hosting company
18:23 dbruhn I am behind the curve on versions compared to most of the guys in here. I use RDMA which isn't as supported.
18:24 cp0k ah, gotcha, I was wondering why you were not on the latest :)
18:25 cp0k I am using Gluster mostly for offsite backup storage and for CDN origin
18:25 dbruhn I am in a weird spot, where I need to squeeze every read I/O I can out of the system
18:25 dbruhn I work for a company called offsitebackups.com ;)
18:25 dbruhn lol
18:26 cp0k heh I work for isprime.com :)
18:26 cp0k nice domain btw
18:26 dbruhn what backup solution are you running?
18:26 dbruhn Asigra? Evault?
18:28 cp0k we run most things in house
18:28 cp0k as simple as possible
18:28 cp0k and tend to stay away from third party software
18:28 dbruhn That's cool
18:29 cp0k basically staying away from proprietary and leaning more towards open source
18:29 dbruhn completely understood
18:30 cp0k Im watching the live stream right now, and one of the guys just mentioned Ceph
18:30 failshel_ joined #gluster
18:30 cp0k which is what I used before Gluster
18:30 cp0k it seemed nice, but boy was it a pita
18:30 dbruhn gahh, missed the start
18:33 cp0k rewind back to like 21:00 :)
18:37 dbruhn I will have to go back and watch the whole thing
18:41 cp0k cool
18:43 dbruhn be back in a bit, gotta take care of some stuff
18:44 cp0k have fun
18:45 bet_ joined #gluster
18:53 mattappe_ joined #gluster
18:53 bstr joined #gluster
18:55 cp0k joined #gluster
18:58 Ark_explorys joined #gluster
19:14 Ark_explorys joined #gluster
19:19 Ark_explorys joined #gluster
19:25 Ark_explorys joined #gluster
19:29 sprachgenerator joined #gluster
19:30 Ark_explorys joined #gluster
19:30 GabrieleV joined #gluster
19:36 diegows joined #gluster
19:51 dbruhn joined #gluster
19:54 JoeJulian Hehe: "Hi Joe. I tried reaching out to you regarding your interest in Ceph previously but have not been able to make further contact. May I ask if you are still interested in Ceph? I look forward to hearing from you."
19:55 failshell joined #gluster
19:56 NuxRo JoeJulian: have you actually tried CePH? :)
19:56 JoeJulian No. It doesn't (or at least didn't) meet my system requirements.
20:00 JoeJulian Ooh! I just thought of a really interesting idea... I wonder if it would work...
20:01 dbruhn lol
20:01 dbruhn Ceph under gluster?
20:03 JoeJulian Given a server with, say, 4 ethernet ports each with different IPs. rrdns the hostname for that server such that the clients will spread their connection load across all 4 ports, effectively quadrupling your throughput to the clients...
20:03 mattappe_ joined #gluster
20:03 JoeJulian ... without resorting to bonding...
20:04 JoeJulian Would even work with each port being on a different switch, allowing an additional layer of hardware redundancy.
20:04 JoeJulian - in theory -
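(A hypothetical BIND-style zone fragment for the idea: one A record per NIC under a single name, so resolvers hand the addresses out in rotating order. Names and addresses are made up.)

    ; one gluster server, four interfaces, one shared name
    gluster1    IN  A   192.0.2.11
    gluster1    IN  A   192.0.2.12
    gluster1    IN  A   192.0.2.13
    gluster1    IN  A   192.0.2.14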
20:05 dbruhn hmm interesting thought
20:05 dbruhn poor mans load balancing
20:06 dbruhn Would obviously be beneficial in environments with lots of clients
20:06 JoeJulian More or less, except with a load balancer wouldn't you still have a single point of failure in the load balancer?
20:06 plarsen joined #gluster
20:06 dbruhn unless you had redundant load balancers
20:07 dbruhn zen loadbalancer at least lets you create a cluster, I believe they have an active/active and active/passive config in the last release
20:07 JoeJulian yuck. I hate failover...
20:07 dbruhn lol
20:08 dbruhn Your idea is far less resource intensive though
20:08 dbruhn on setup, and delivery
20:12 recidive joined #gluster
20:18 Matthaeus joined #gluster
20:27 cp0k joined #gluster
20:35 B21956 joined #gluster
20:39 KyleG joined #gluster
20:39 KyleG joined #gluster
20:50 Gluster joined #gluster
20:53 Gluster howdy,  If using ext4 as the filesystem for bricks.  Does gluster use the quota files you specified in the usrjquota/grpjquota mount option for anything?
20:53 Gluster oops
20:54 rpowell howdy,  If using ext4 as the filesystem for bricks.  Does gluster use the quota files you specified in the usrjquota/grpjquota mount option for anything?
20:54 zerick joined #gluster
20:54 JoeJulian hehe
20:55 plarsen joined #gluster
20:57 rpowell joined #gluster
20:59 NuxRo JoeJulian: that's a nice idea! this would require quite low ttl hostnames for peers though
21:00 rwheeler joined #gluster
21:03 failshel_ joined #gluster
21:06 dneary joined #gluster
21:07 dbruhn_ joined #gluster
21:10 tdasilva joined #gluster
21:14 recidive joined #gluster
21:21 failshell joined #gluster
21:21 theron joined #gluster
21:36 sprachgenerator joined #gluster
21:48 neofob joined #gluster
22:03 LessSeen joined #gluster
22:12 sputnik13 joined #gluster
22:17 JoeJulian rpowell: Gah, I was in the middle of typing my response when I got called away from my desk. I didn't mean to just laugh at you and run. :D
22:19 JoeJulian rpowell: No. Gluster will not use the filesystem quota. It has its own quota management.
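(For completeness, gluster's own quota is managed per directory with the volume quota commands; volume name, path and limit here are placeholders.)

    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage /projects 10GB
    gluster volume quota myvol list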
22:20 B21956 joined #gluster
22:30 andreask joined #gluster
22:31 rpowell JoeJulian:  Ha,  Thanks.  I had a weird issue.  We use NFS to mount gluster because we need ACL support.  I kicked off a quotacheck on one of my bricks.  And instantly started getting I/O errors for files on that brick via nfs.  Meanwhile fuse.gluster was working fine for those same files.  I remounted the drive without quotas and nfs started allowing me to access the files.
22:32 JoeJulian Why nfs instead of fuse with --acl ?
22:32 rpowell because I did not know fuse worked with --acl
22:33 rpowell please tell me thats a real thing
22:33 JoeJulian Er, mountoption acl
22:33 JoeJulian yeah
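(A minimal sketch of the acl mount option being pointed at; server, volume and mount point are assumptions.)

    # On the command line:
    mount -t glusterfs -o acl server1:/myvol /mnt/gluster

    # Or in /etc/fstab:
    server1:/myvol  /mnt/gluster  glusterfs  acl,_netdev  0 0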
22:33 * rpowell smacks himself
22:35 * nikk_ smacks rpowell
22:35 rpowell I honestly thought ACLs only worked on nfs
22:35 rpowell this makes my life so much easier
22:35 rpowell :)
22:35 JoeJulian yay
22:38 Ark_explorys joined #gluster
22:46 qdk joined #gluster
22:56 recidive joined #gluster
23:03 sarkis joined #gluster
23:10 mattappe_ joined #gluster
23:21 KyleG joined #gluster
23:21 KyleG joined #gluster
23:41 jporterfield joined #gluster
23:47 recidive joined #gluster
23:54 mattappe_ joined #gluster
23:57 jporterfield joined #gluster
23:58 atrius joined #gluster
