IRC log for #gluster, 2013-12-12

All times shown according to UTC.

Time Nick Message
00:02 kmai007 if i'm using glusterfs3.4-1 shouldn't the client logs also reflect that?
00:03 kmai007 1-devstatic-client-2: Using Program GlusterFS 3.3, is what is written when the client is able to talk to a brick
00:11 johnbot11 joined #gluster
00:12 [o__o] left #gluster
00:13 JoeJulian Would seem logical but I think that's more about which RPC protocol is being used.
00:15 johnbot11 joined #gluster
00:19 [o__o] joined #gluster
00:21 [o__o] left #gluster
00:24 [o__o] joined #gluster
00:26 SFLimey joined #gluster
00:26 [o__o] left #gluster
00:28 [o__o] joined #gluster
00:28 SFLimey joined #gluster
00:41 tg2 JoeJulian, tiered storage any time soon?
00:45 JoeJulian Closest I've got is tired storage....
00:51 theron joined #gluster
00:52 bala joined #gluster
00:53 vong_ joined #gluster
00:53 vong_ joined #gluster
01:12 jag3773 joined #gluster
01:16 gmcwhistler joined #gluster
01:26 psyl0n joined #gluster
01:45 keytab joined #gluster
02:28 hagarth joined #gluster
02:34 kshlm joined #gluster
02:39 gmcwhistler joined #gluster
02:45 saurabh joined #gluster
02:45 DV joined #gluster
02:47 _ilbot joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:59 bharata-rao joined #gluster
03:04 harish joined #gluster
03:16 zwu joined #gluster
03:35 rjoseph joined #gluster
03:43 kanagaraj joined #gluster
03:46 RameshN joined #gluster
03:48 zwu joined #gluster
04:04 itisravi joined #gluster
04:08 shylesh joined #gluster
04:21 jag3773 joined #gluster
04:25 ppai joined #gluster
04:30 meghanam joined #gluster
04:34 CLDSupportSystem joined #gluster
04:40 ndarshan joined #gluster
04:50 anands joined #gluster
04:51 prasanth joined #gluster
04:57 theron joined #gluster
04:59 dusmant joined #gluster
04:59 nshaikh joined #gluster
05:00 bala joined #gluster
05:07 raghu joined #gluster
05:11 MiteshShah joined #gluster
05:15 ababu joined #gluster
05:21 dhyan joined #gluster
05:23 aravindavk joined #gluster
05:30 psharma joined #gluster
05:30 zwu joined #gluster
05:34 vpshastry joined #gluster
05:41 AndreyGrebenniko joined #gluster
05:41 shruti joined #gluster
05:48 shyam joined #gluster
05:49 vshankar joined #gluster
06:05 pk_ joined #gluster
06:06 theron joined #gluster
06:07 pk_ xavih: Are you Xavier Hernandez?
06:11 kdhananjay joined #gluster
06:12 davidbierce joined #gluster
06:12 zeittunnel joined #gluster
06:16 theron joined #gluster
06:20 bulde joined #gluster
06:27 krypto joined #gluster
06:33 shubhendu joined #gluster
06:35 theron joined #gluster
06:35 dusmant joined #gluster
06:37 CheRi joined #gluster
06:47 theron joined #gluster
06:48 mohankumar joined #gluster
06:49 mohankumar joined #gluster
06:54 MiteshShah joined #gluster
06:55 aravindavk joined #gluster
06:55 RameshN joined #gluster
06:55 ndarshan joined #gluster
06:58 theron joined #gluster
07:09 vkoppad joined #gluster
07:10 theron joined #gluster
07:34 anands joined #gluster
07:38 ngoswami joined #gluster
07:42 jtux joined #gluster
07:42 ekuric joined #gluster
07:46 ninkotech joined #gluster
07:50 chirino_m joined #gluster
07:57 eryc joined #gluster
08:09 aravindavk joined #gluster
08:16 eseyman joined #gluster
08:19 REdOG joined #gluster
08:22 ndarshan joined #gluster
08:27 mbukatov joined #gluster
08:36 shubhendu joined #gluster
08:37 jag3773 joined #gluster
08:38 RameshN joined #gluster
08:41 dusmant joined #gluster
08:41 glusterbot New news from newglusterbugs: [Bug 1040844] glusterd process crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1040844>
08:53 nueces joined #gluster
08:55 psyl0n joined #gluster
08:56 XATRIX joined #gluster
08:57 andreask joined #gluster
08:57 XATRIX Hi guys, I'd like to know: I have 4x disks for my data storage. 2x on node 1 and 2 on the second
08:57 XATRIX Do i have to setup mdraid1 as /dev/md0. And mount the ext4 fs somewhere to. And later on export this fs as a gluster ?
08:58 XATRIX Or i can skip the local raid or use mdraid0
08:58 XATRIX As for speed up the device
08:59 XATRIX The second question is , can i export my /dev/md0 device filesystem without mounting it somewhere ?
09:02 XATRIX I mean, can i do something like #gluster create volume datastorage replica 2 transport tcp server1:/dev/md0 server2:/dev/md0
09:02 XATRIX Instead of mounting /dev/md0 to /mnt and lateron do the brick from sever1:/mnt
09:03 calum_ joined #gluster
09:12 glusterbot New news from newglusterbugs: [Bug 1040862] volume status detail command cause fd leak <https://bugzilla.redhat.com/show_bug.cgi?id=1040862>
09:15 MiteshShah joined #gluster
09:19 Alpinist joined #gluster
09:19 Staples84 joined #gluster
09:25 samppah XATRIX: normally you export filesystems with glusterfs.. however, there is also blockdevice translator which makes it possible to export also logical volumes
09:25 samppah i guess those are mainly for VM use
09:26 xavih pk_: yes, it's me :)
09:27 XATRIX Em... You mean i have to do a LVM group and export it with gluster between the nodes ?
09:28 theron joined #gluster
09:29 samppah XATRIX: no, i mean that you have two possibilities.. the usual way is to export a mounted filesystem
09:30 samppah another option is to use block device translator in glusterfs.. which make it possible to export logical volumes, but that's something that i haven't tested at all
09:30 samppah and i don't think that it's very common use case currently
09:30 ninkotech joined #gluster
09:30 ninkotech_ joined #gluster
09:34 XATRIX samppah: The trouble for me is that i have to setup a two-nodes cluster, and i setup a gluster-sever on node1 + node2
09:34 XATRIX So, i have to do
09:34 XATRIX mount /dev/md0 /mnt
09:35 theron joined #gluster
09:36 calum_ joined #gluster
09:36 XATRIX mkdir /storage && gluster volume create datastorage replica 2 transport tcp ve1-ua:/mnt ve2-ua:/mnt
09:36 XATRIX and later on, mount -t glusterfs ve1-ua:/mnt /storage
09:36 XATRIX It's not a good setup as for me
09:36 XATRIX Too many mounts
09:37 XATRIX Before it i did mdraid1->DRBD->cLVM->GFS2
09:37 XATRIX But it's extremely slow on network locks
09:37 XATRIX Gluster works pretty much faster for me
09:40 samppah XATRIX: why do you think that it's too many mounts?
09:41 samppah let the machines handle them :)
09:41 XATRIX Because i have to mount the device, mount the share, and mount the share again as a client to gluster
09:42 theron joined #gluster
09:43 samppah XATRIX: you'll run gluster volume create just once
09:44 samppah after that you only have to take care that /dev/md0 is mounted
09:44 samppah and then use the client to mount gluster volume
09:44 XATRIX Yes, i mean i have to mount /dev/md0 , and mount the share as a client again
09:44 samppah yes
09:44 XATRIX But ok, it's not a problem
09:44 samppah good :)
09:45 XATRIX I simply was looking for a shorter way :)
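For reference, a minimal sketch of the sequence samppah describes, using the hostnames and device from this conversation (ve1-ua, ve2-ua, /dev/md0); the brick is usually a subdirectory of the mounted filesystem, and the client mounts the volume name rather than the brick path:

    # on both nodes: mount the filesystem that will hold the brick
    mount /dev/md0 /mnt
    mkdir -p /mnt/brick

    # on one node only: create and start the replica 2 volume
    gluster volume create datastorage replica 2 transport tcp \
        ve1-ua:/mnt/brick ve2-ua:/mnt/brick
    gluster volume start datastorage

    # on any client (the servers can also mount it themselves)
    mkdir -p /storage
    mount -t glusterfs ve1-ua:/datastorage /storage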
09:55 XATRIX samppah: Also do i also create a mirror for ext4 and later volume replica ?
09:55 XATRIX Or i can do a raid stripe as for faster disk access and later replica this volume
09:56 samppah XATRIX: i personally prefer raid6 and raid10
09:57 samppah XATRIX: what kind of use case you have?
09:57 XATRIX Yes, and i prefer raid0-1 because of the money i have :)
09:57 samppah :D
09:57 samppah hehe
09:57 samppah i understand
09:57 * XATRIX is from Ukraine :)
09:58 samppah problem with raid0 is that if you lose one disk then the whole array is unusable and you have to wait for it to be replicated from the other gluster node
09:58 samppah aand if disk breaks on that another node... ouch
09:59 XATRIX Yea, but in this case will gluster transparently handle the sync with the failed node when it comes back online ?
10:01 XATRIX DRBD+GFS2 has bad consequences after a network failure
10:02 XATRIX It MUST be fenced out, because it can't handle split-brain by itself
10:02 XATRIX In dual-primary mode
10:02 XATRIX Primary/Primary
10:02 XATRIX What about gluster ? In case of poweroff-poweron the remote node
10:04 samppah XATRIX: it should self-heal files that have been modified
10:05 vpshastry joined #gluster
10:05 samppah but it can't handle split brain either.. ie if link fails between nodes and both nodes write to file after that
10:09 XATRIX What's the way to fix split-brain on gluster ?
10:10 ababu joined #gluster
10:18 hurl joined #gluster
10:19 samppah XATRIX: you have to manually delete bad file
10:19 samppah there
10:20 samppah 's good documentation available at http://www.joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
10:20 glusterbot Title: Fixing split-brain with GlusterFS 3.3 (at www.joejulian.name)
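The procedure in the linked post boils down to deleting the copy you decide is bad directly on its brick, together with its gfid hard link, then letting self-heal copy the good one back; the volume name and paths below are placeholders:

    # list files the self-heal daemon reports as split-brain (3.3+)
    gluster volume heal datastorage info split-brain

    # on the brick holding the bad copy, find its gfid...
    getfattr -n trusted.gfid -e hex /mnt/brick/path/to/file

    # ...then remove the file and its hard link under .glusterfs
    # (.glusterfs/<first two hex chars>/<next two>/<full gfid>)
    rm /mnt/brick/path/to/file
    rm /mnt/brick/.glusterfs/ab/cd/abcdef01-....

    # trigger a heal, or just stat the file through a client mount
    gluster volume heal datastorage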
10:24 hurl hi all. I'm having lot of "no such file" errors in logs. I'm just wondering how to figure where is the problem and how to fix it
10:28 theron joined #gluster
10:33 badone joined #gluster
10:35 vpshastry1 joined #gluster
10:37 klaxa|web joined #gluster
10:37 klaxa|web in glusterfs 3.3.2 can i add bricks to a replica volume after i created it?
10:40 klaxa|web documentation suggests i can so i will assume just that :)
10:41 klaxa|web except i can't create a replica volume with just one brick...
10:43 klaxa|web we're upgrading our servers, one at a time, updating from 3.2.7 to 3.3.2
10:44 klaxa|web running into some problems at that
10:45 klaxa|web logs are filled with this: https://gist.github.com/anonymous/7926148
10:45 glusterbot Title: storage2-.log (at gist.github.com)
10:45 spandit joined #gluster
10:45 klaxa|web it seems like the variable 0-storage2-client-0 is an empty string
10:47 klaxa|web on a different setup we have the configuration file trusted-storage-fuse.vol in which those seem to be defined, they are not present on the setup that is being upgraded
10:47 samppah klaxa|web: you should be able to create the volume with gluster vol create volName server:/brick and then change it to replica with gluster vol add-brick volName replica 2 server2:/brick
10:47 klaxa|web samppah: thanks i found that in the help for the gluster command too :) should have looked there first
10:48 klaxa|web however, i cannot create a replica volume with one brick (which makes sense in general) but we are upgrading the systems separately
10:48 klaxa|web we wanted to see if the first server will run stable with the new setup and only then upgrade the second server
10:49 samppah what's the actual command that is failing?
10:49 klaxa|web and with only one brick available i can't create any sane volume
10:49 klaxa|web gluster> volume create storage2 replica 1 transport rdma 10.0.0.4:/srv/glusterfs (output: replica count should be greater than 1)
10:49 samppah klaxa|web: try leaving replica 1 out of it
10:50 klaxa|web can i change the type afterwards?
10:50 samppah klaxa|web: that's possible with 3.4 at least
10:50 klaxa|web we're on 3.3.2 :<
10:51 klaxa|web according to the mailing-lists 3.4 has no stable rdma support
10:51 klaxa|web what would be the command in 3.4?
10:53 samppah klaxa|web: cli works the same way with 3.4
10:54 klaxa|web yes but maybe the command is not implemented in 3.3.2, so if the command in 3.4 is the same as in 3.3.2 i would assume that it was present in 3.3.2 too
10:58 hurl joined #gluster
10:59 klaxa|web the add-brick command seems to handle the type of brick you add, so i guess that will work
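A sketch of what samppah suggests, with the second server's address (10.0.0.5) invented for illustration; samppah only confirms the replica-raising add-brick for 3.4, so treat it as untested on 3.3.2:

    # create and start a plain single-brick volume (no replica keyword)
    gluster volume create storage2 transport rdma 10.0.0.4:/srv/glusterfs
    gluster volume start storage2

    # later, once the second server is upgraded, add its brick and
    # raise the replica count in the same command (note "replica 2")
    gluster volume add-brick storage2 replica 2 10.0.0.5:/srv/glusterfs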
11:02 ricky-ti1 joined #gluster
11:03 badone joined #gluster
11:13 XATRIX Also, i'd like to know, if i mount a partition shared with gluster on my local node, and place a OpenVZ virtual server on it
11:13 XATRIX It will be fast as like if i would work with localhost ext4
11:13 XATRIX ?
11:19 ndarshan joined #gluster
11:31 badone joined #gluster
11:51 shubhendu joined #gluster
11:59 badone joined #gluster
12:00 gdubreui joined #gluster
12:12 glusterbot New news from newglusterbugs: [Bug 990089] do not unlink the gfid handle upon last unlink without checking for open fds <https://bugzilla.redhat.com/show_bug.cgi?id=990089>
12:16 katka joined #gluster
12:16 edward1 joined #gluster
12:17 CheRi joined #gluster
12:19 andreask joined #gluster
12:20 theron joined #gluster
12:24 ppai joined #gluster
12:34 theron_ joined #gluster
12:35 harish joined #gluster
12:38 sheldonh joined #gluster
12:39 sheldonh if i run glusterfs replica across two servers, with quorum-type None, and they go splitbrain... if foo.txt was written on both, during splitbrain, what happens when the two servers find each other again?
12:41 sheldonh my application would love it if gluster just let the newest copy of the file win
12:53 jskinner_ joined #gluster
12:55 kdhananjay joined #gluster
13:06 zeittunnel joined #gluster
13:09 tjikkun_work joined #gluster
13:13 badone joined #gluster
13:23 CheRi joined #gluster
13:32 DV joined #gluster
13:35 anands joined #gluster
13:43 glusterbot New news from newglusterbugs: [Bug 1041109] structure needs cleaning <https://bugzilla.redhat.com/show_bug.cgi?id=1041109>
13:47 XATRIX Do i have to use gluster in FUSE ?
13:47 XATRIX My proxmox mounts the gluster partition via FUSE
13:49 dbruhn joined #gluster
13:51 Cenbe joined #gluster
13:58 gmcwhistler joined #gluster
14:04 bennyturns joined #gluster
14:08 hurl joined #gluster
14:08 gmcwhistler joined #gluster
14:10 coxy82 joined #gluster
14:10 bala joined #gluster
14:14 tqrst XATRIX: fuse is the recommended way, but you can also use nfs
14:15 kshlm joined #gluster
14:21 XATRIX tqrst: no no you didn't get me
14:21 XATRIX I don't want to use NFS, because if so, i would install NFS
14:21 XATRIX And i don't want fuse
14:21 XATRIX Because it is cpu hungry
14:22 harish joined #gluster
14:22 sheldonh XATRIX: what do you think the difference is between a gnfs mount and a non-fuse gluster mount?
14:22 calum_ joined #gluster
14:25 vkoppad joined #gluster
14:34 CLDSupportSystem joined #gluster
14:34 sroy_ joined #gluster
14:36 XATRIX sheldonh: no idea :(
14:36 XATRIX can you explain ?
14:37 sheldonh XATRIX: well i'm not sure why you are discounting tqrst's suggestion, when it seems to offer what you want (non-FUSE gluster mount)
14:38 sheldonh XATRIX: so i'm just trying to better understand what you don't like about gnfs
14:38 kkeithley1 joined #gluster
14:38 tqrst (also, is fuse really that cpu hungry?)
14:39 XATRIX Sure
14:39 kkeithley1 left #gluster
14:39 XATRIX While uploading a big file to the directory, i have >~60% +/- cpu
14:40 sheldonh wow. that doesn't sound right
14:40 sheldonh what process / kthread is chowing cpu like that?
14:40 XATRIX sheldonh: i think gnfs is an analog of NFS access mount
14:40 XATRIX If you gonna ride this way, maybe better to use NFS filesystem ?
14:40 sheldonh XATRIX: i think that's very much what gnfs is -- nfs, with a lot less sucking involved :)
14:40 XATRIX So, i'd like to find a fastest way without FUSE
14:41 sheldonh XATRIX: i think a) you should investigate the insane CPU usage deeper, and b) gnfs is the answer to the immediate question you're asking
14:41 kmai007 joined #gluster
14:42 sheldonh XATRIX: i would be very interested to hear which process or kernel thread is using so much CPU when that happens
14:42 sheldonh XATRIX: 'cause we haven't been able to notice gluster-fuse. but then maybe our network is too much of a bottleneck. you running >1GbE?
14:43 XATRIX Yeap
14:43 bennyturns joined #gluster
14:43 shubhendu joined #gluster
14:44 kmai007 from the client fuse logs
14:44 kmai007 from a distr/rep. 4 node gluster
14:44 vpshastry joined #gluster
14:44 XATRIX Look , some time ago, I used NTFS mounted via FUSE on my laptop
14:44 kmai007 if the client cannot find the file would that be logged as this ?   1-devstatic-client-0: remote operation failed: No such file or directory
14:44 XATRIX It also did a lot of cpu  load while copy\read file
14:45 kmai007 followed by  1-devstatic-client-1: remote operation failed: No such file or directory
14:45 vpshastry left #gluster
14:45 sheldonh XATRIX: ah. are you assuming gluster-fuse is slow because ntfs-fuse was slow, or are you measuring gluster-fuse and finding it slow?
14:47 XATRIX I'm talking about gluster-fuse
14:47 XATRIX Becasue of i never tried NTFS
14:47 XATRIX s/NTFS/NFS
14:47 XATRIX Oh, hell
14:47 XATRIX skip last 2 lines
14:47 sheldonh wow, so confused :)  i was responding to "<XATRIX> Look , some time ago, I used NTFS mounted via FUSE on my laptop" :)
14:48 XATRIX Yea...
14:50 sheldonh my understanding of fuse is that its performance cost is mostly the extra memory-memory copies. so it's a great fit for network filesystems, where the cost of the extra copies is dwarfed by the cost of packetization
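For reference, the two mount styles being compared; server and mount-point names are placeholders, and Gluster's built-in NFS server speaks NFSv3 only, so vers=3 usually has to be forced:

    # native (FUSE) client
    mount -t glusterfs server1:/datastorage /mnt/gluster

    # gnfs: the same volume exported by glusterfs' built-in NFS server,
    # no separate nfsd needed
    mount -t nfs -o vers=3,tcp server1:/datastorage /mnt/gluster-nfs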
14:51 jbrooks joined #gluster
14:52 XATRIX Ok, i'll try to inspect this out
14:54 dhyan joined #gluster
15:01 jag3773 joined #gluster
15:05 bugs_ joined #gluster
15:13 sjoeboo joined #gluster
15:20 DV joined #gluster
15:21 theron joined #gluster
15:21 wushudoin joined #gluster
15:25 kmai007 outside of logs, is there a gluster CLI report that says when the last self-heal ran?
15:29 sarkis joined #gluster
15:31 andreask joined #gluster
15:42 deepakcs joined #gluster
15:47 badone joined #gluster
15:49 theron joined #gluster
15:50 ndk joined #gluster
16:00 semiosis kmai007: self heal should be running continuously
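Besides the logs, the heal sub-commands give a CLI view of self-heal activity (volume name taken from kmai007's log excerpts; the history sub-commands exist in 3.3/3.4):

    gluster volume heal devstatic info             # entries still needing heal
    gluster volume heal devstatic info healed      # recently healed entries
    gluster volume heal devstatic info heal-failed # entries that failed to heal
    gluster volume heal devstatic info split-brain # entries in split-brain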
16:01 johnbot11 joined #gluster
16:03 ndk` joined #gluster
16:05 ira joined #gluster
16:06 ira joined #gluster
16:09 gkleiman joined #gluster
16:09 gkleiman_ joined #gluster
16:09 theron joined #gluster
16:09 andreask joined #gluster
16:15 tqrst is rebalance idempotent? (if I rebalance twice with no writes in between, will the second rebalance have any effect?)
16:18 semiosis tqrst: should be
16:19 dhyan joined #gluster
16:19 semiosis things that indicate a rebalance: file renames & adding bricks
16:19 semiosis if you dont do either of those, a rebal should not be needed
16:19 tqrst yeah, just curious
16:21 tqrst hrm, rebalance is done on half of my servers and is more than halfway through on the others, yet I still have a bunch of drives at 40% vs others at 80
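For reference, the rebalance commands being discussed (volume name is a placeholder); fix-layout only rewrites directory layouts, the plain form also migrates data, and neither guarantees that disk usage ends up equal across bricks:

    gluster volume rebalance myvol fix-layout start
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status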
16:21 shubhendu joined #gluster
16:22 _Bryan_ joined #gluster
16:24 shyam joined #gluster
16:26 kmai007 semiosis, is dist./rep a design you would choose, say, for storage exported via SMB with SVN committing through it?
16:26 kmai007 i'm seeing a lot of directory not found and stale handle logged in my client that is exporting the SMB share
16:26 semiosis dist-rep is the right choice for just about everything, imho.
16:27 kmai007 i just see this 1-devstatic-client-0: remote operation failed: No such file or directory  and not just client 0, but 1, 2, 3
16:27 kmai007 i guess i wonder if my users are deleting the file after they create it
16:28 kmai007 or something strange
16:28 kmai007 0-devstatic-client-1: remote operation failed: Stale file handle. Path: /employees_mobile/htdocs/software/app_store/unmin/scripts/services/AppService.js
16:29 semiosis any time you see 'remote operation failed' in a client log you should look for a corresponding message in the brick log, which is the other side of the remote operation
16:29 semiosis also, is that an error or just an info message?  might be nothing to worry about
16:29 kmai007 so how do i determine what is ERROR vs. INFO
16:30 kmai007 [2013-12-12 16:05:35.379332] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-devstatic-client-1: remote operation failed: Stale file handle. Path: /employees_mobile/htdocs/software/app_store/unmin/scripts/services/AppService.js (c3f2b263-893c-482f-b258-c0f574f8f000)
16:30 kmai007 is the W = winning?
16:30 kmai007 lol
16:30 semiosis warning
16:30 semiosis E is error, I is info
16:30 semiosis D is debug....
16:30 semiosis and so on
16:30 kmai007 cool thanks i'll check the bricks
16:30 semiosis yw
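Given that log format, a quick way to pull only warnings and errors out of the client and brick logs (the client log filename depends on the mount point, so the one below is a guess):

    # client side
    egrep ' [EW] \[' /var/log/glusterfs/mnt-devstatic.log

    # server side: brick logs live under bricks/
    egrep ' [EW] \[' /var/log/glusterfs/bricks/*.log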
16:32 jbrooks Hey guys -- major doh! moment -- I mistakenly deleted a volume... is there any way to undo that or recreate from the data in my brick?
16:33 jbrooks it's a one-brick distributed volume
16:34 thogue joined #gluster
16:35 semiosis just create a new volume with the same name & the same brick path
16:35 semiosis when you get the path or a prefix of it error read this link
16:35 glusterbot semiosis: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
16:36 jbrooks semiosis: ah yes, I know that link well
16:36 jbrooks sweet, thanks!
16:36 semiosis yw
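The fix glusterbot links to amounts to clearing the old volume's markers on the brick before re-creating it; brick path and volume name below are placeholders, and the .glusterfs directory holds only hard links, so the file data itself stays in place:

    # on the server holding the old brick
    setfattr -x trusted.glusterfs.volume-id /bricks/mybrick
    setfattr -x trusted.gfid /bricks/mybrick
    rm -rf /bricks/mybrick/.glusterfs

    # then re-create and start the volume with the same brick path
    gluster volume create myvol server1:/bricks/mybrick
    gluster volume start myvol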
16:36 theron_ joined #gluster
16:38 serencus joined #gluster
16:43 zerick joined #gluster
16:45 leblaaanc joined #gluster
16:45 shubhendu joined #gluster
16:53 jag3773 joined #gluster
16:54 zerick joined #gluster
16:55 Technicool joined #gluster
16:58 XATRIX ve1-ua:storage on /mnt/pve/storage type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
16:58 XATRIX That's what i'm talking about
17:00 dylan_ joined #gluster
17:02 leblaaanc JoeJulian: hey same question from yesterday. Two bricks, both have had data on them but have been used as normal volumes but one is up to date. How can I sync back up the bricks?
17:05 XATRIX tqrst: http://www.zimagez.com/zimage/screenshot-12122013-190521.php
17:05 shyam joined #gluster
17:06 leblaaanc need mor ram
17:18 dylan_ joined #gluster
17:22 brimstone joined #gluster
17:23 brimstone how do i actually use a translator?
17:24 dewey joined #gluster
17:27 dylan_ joined #gluster
17:34 Mo_ joined #gluster
17:43 semiosis brimstone: you don't.  glusterfs source code is organized into modules called translators (xlators) you don't interact directly with them, you just *use* glusterfs
17:44 semiosis for configuration options, see output of 'gluster volume set help' and if you're feeling adventurous ,,(undocumented options)
17:44 glusterbot Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
17:46 jbd1 joined #gluster
17:47 rwheeler joined #gluster
18:01 XATRIX samppah: Are you still online ?
18:01 XATRIX fused gluster very slow
18:02 XATRIX samppah: http://www.zimagez.com/zimage/screenshot-12122013-200204.php
18:04 anands joined #gluster
18:12 SFLimey_ joined #gluster
18:14 glusterbot New news from newglusterbugs: [Bug 1041583] upstream rhs-hadoop packager needs README and new subdirectory logic. <https://bugzilla.redhat.com/show_bug.cgi?id=1041583>
18:17 rotbeard joined #gluster
18:17 partner evening, probably a faq so pardon me but as you have the knowledge here.. is there anything i can do to speed up fix-layout on 3.3.2 ?
18:19 partner faster disks?-)
18:25 andreask joined #gluster
18:27 zaitcev joined #gluster
18:29 purpleidea joined #gluster
18:35 _pol joined #gluster
18:39 brimstone semiosis: thanks, i'm trying to enable the rot-13 translator, but i don't see an option for it
18:40 semiosis the what?!
18:40 brimstone hey, don't blame me, you guys put it in the source
18:48 * semiosis did no such thing
18:48 semiosis <-- ,,(volunteer)
18:48 semiosis heh
18:48 semiosis not a developer
18:48 glusterbot A person who voluntarily undertakes or expresses a willingness to undertake a service: as one who renders a service or takes part in a transaction while having no legal concern or interest or receiving valuable consideration.
18:51 badone joined #gluster
18:52 brimstone i created a volume following the quick start using  "gluster volume create". Is there a way to export this to a profile file?
18:52 semiosis not exactly
18:52 semiosis what is your goal?
18:52 brimstone to enable the rot-13 translator :)
18:54 semiosis if that is really your goal, then me doing it for you would seem to take all the fun out of it
18:54 brimstone probably
18:54 semiosis however if your trying to do that toward some other end, then maybe i can help you get there
18:55 brimstone i guess i should write my own volume profile by hand?
18:55 brimstone my end goal is to write and submit a new translator
18:55 brimstone but i need to first understand how the translators work
18:56 semiosis have you read jdarcy's guides?
18:56 brimstone i have not found them yet to read
18:58 semiosis probably a good place to start is here: http://www.gluster.org/community/documentation/index.php/Arch/A_Newbie's_Guide_to_Gluster_Internals
18:58 glusterbot Title: Arch/A Newbie's Guide to Gluster Internals - GlusterDocumentation (at www.gluster.org)
18:58 semiosis good luck
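For the record, the rot-13 xlator is loaded by writing a volume file by hand rather than through the gluster CLI; roughly along these lines (directory and filenames are made up, and the exact graph follows jdarcy's translator guides, so double-check against them):

    volume my-posix
        type storage/posix
        option directory /srv/rot13-backend
    end-volume

    volume my-rot13
        type encryption/rot-13
        subvolumes my-posix
    end-volume

    # mount the hand-written graph directly:
    #   glusterfs --debug -f rot13.vol /mnt/rot13-test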
18:58 kmai007 DHT mismatching layouts / anomalies , is there any action I should take when i see this from the client logs ?
18:59 kmai007 I [dht-layout.c:630:dht_layout_normalize] 1-devstatic-dht: found anomalies in /employees/htdocs/tos/vmu/secure/app/.svn/prop-base. holes=1 overlaps=0
19:00 failshell joined #gluster
19:01 brimstone semiosis: oh, excellent, thanks!
19:01 kmai007 so for the path in question, i see double directories/double files
19:01 kmai007 from the client
19:01 kmai007 while the bricks only show 1 unique directory, and not double
19:05 kmai007 and now its fine
19:05 kmai007 i guess patience is a virtue
19:05 dbruhn If you keep trying you will probably see it again
19:05 kmai007 fine = cleaned up 1 unique listing
19:05 kmai007 no
19:05 kmai007 i will type while my eyes are closed then
19:07 kmai007 strange, so it must be the developer that has now removed all the files
19:09 kmai007 ok now that the fire is put out, are there any doc. or practices for client vol files?
19:09 kmai007 i guess i don't want to mess with that
19:10 kmai007 i was thinking how can i speed up the WAIT time if  a brick was offline, and the client has it in their fstab to fetch the vol file from it, but cant
19:10 kmai007 and force it to fetch it from a 2nd brick
19:12 partner use round robin for vol file servers
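Besides round-robin DNS, the fuse mount accepts a fallback volfile server, e.g. in fstab (hostnames are placeholders; the option is spelled backupvolfile-server in 3.3/3.4, and later releases add a plural backup-volfile-servers form):

    # /etc/fstab
    server1:/devstatic  /mnt/devstatic  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0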
19:12 partner not related but i do see same log entries on my distributed volume, probably due to having several bricks and not everything is in balance
19:13 johnbot11 joined #gluster
19:14 leblaaanc joined #gluster
19:14 hagarth brimstone: feel free to hit gluster-devel if you need help with code
19:20 brimstone hagarth: will do, thanks
19:26 bdperkin_gone joined #gluster
19:34 anands joined #gluster
19:35 badone joined #gluster
19:47 Technicool joined #gluster
19:56 kmai007 i'm not sure how this can be
19:56 kmai007 [2013-12-12 18:46:59.923757] E [afr-common.c:3735:afr_notify] 0-devstatic-replicate-1: All subvolumes are down. Going offline until atleast
19:56 kmai007 all the nodes show connected when i peer staus them
20:00 semiosis kmai007: what version of glusterfs?
20:00 kmai007 3.4.1-3
20:00 kmai007 http://ur1.ca/g6h1v
20:00 glusterbot Title: #61272 Fedora Project Pastebin (at ur1.ca)
20:01 semiosis make a new client mount, check its log.  if it connects ok, then unmount & remount the problematic client
20:01 semiosis iptables, maybe?
20:01 kmai007 accept all
20:01 kmai007 here is the gluster log
20:01 kmai007 http://fpaste.org/61273/86878476/
20:01 glusterbot Title: #61273 Fedora Project Pastebin (at fpaste.org)
20:02 semiosis idk what log that is from, and those three lines mean very little to me
20:02 semiosis if you're going to pastie logs, then pastie the logs, not just a couple lines
20:02 semiosis afk, lunchtime
20:03 kmai007 ok
20:03 kmai007 where is the new mount on the client for the same volume
20:03 kmai007 http://ur1.ca/g6h24
20:03 glusterbot Title: #61274 Fedora Project Pastebin (at ur1.ca)
20:05 kmai007 here is 1000 lines of the client log that issued the disconnect
20:05 kmai007 http://fpaste.org/61276/87869913/
20:05 glusterbot Title: #61276 Fedora Project Pastebin (at fpaste.org)
20:08 semiosis possibly some kind of NAT issue?
20:08 semiosis or connection time limit?
20:08 semiosis i notice you're using public IPs, so if the gluster traffic is going across the net, not just a lan, many weird things are possible
20:08 semiosis ok afk for real now
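A few checks that usually narrow "all subvolumes are down" to a connectivity problem (volume name from the logs above; note gluster volume status has to be run on a server, not a pure client):

    gluster volume status devstatic        # on a server: brick ports and online state
    # from the affected client: glusterd listens on 24007,
    # bricks in 3.4 use ports from 49152 upwards
    telnet server1 24007
    telnet server1 49152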
20:11 dneary joined #gluster
20:15 cfeller joined #gluster
20:16 zerick joined #gluster
20:17 rotbeard joined #gluster
20:17 bdperkin_gone joined #gluster
20:18 bdperkin joined #gluster
20:19 olivier joined #gluster
20:19 olivier hello
20:19 glusterbot olivier: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
20:21 daMaestro joined #gluster
20:24 olivier hi, i have a cluster of 4 gluster servers, 2x10Gb eth each, on a 10Gb sw. full optics. servers are dell powervault. 10000rpm disks. mounting the glusterfs client, copying some files (cp, or dd) i got smt like 350MB/s. and when using a front mounting glusterfs under smb i got smt like 150-200MB/s. are those coherent results, if only waited results?
20:31 ninkotech_ joined #gluster
20:32 PatNarciso joined #gluster
20:33 PatNarciso Hey fellas.  Long time no chat.
20:34 PatNarciso In a Replicated Distributed Gluster; "The number of bricks should be a multiple of the replica count for a distributed replicated volume.".  I understand this is for the greatest redundancy.  Is there anything wrong with the number of bricks being greater than the replica?  Ex, 3 bricks with a replica of 2?
20:35 semiosis PatNarciso: that is impossible
20:36 PatNarciso I'm good at coming up with impossible situations.  So;  If I had a single volume, 2 server 2 brick 2 replica distributed, and wanted to expand to another server.  I'd have to add two more servers?
20:37 semiosis it's required that the number of bricks be a multiple of the replica count
20:37 semiosis it's strongly recommended (though not absolutely required) for the number of servers to also be a multiple of the replica count
20:38 andreask joined #gluster
20:38 PatNarciso dang.
20:38 semiosis you could, of course, put more than one brick on a server
20:39 semiosis my rule of thumb is this: add bricks (or expand existing bricks) to add capacity, add servers to add performance
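So expanding a replica 2 volume means adding bricks in multiples of two, e.g. (hostnames and paths are placeholders):

    # add one new replica pair, then spread existing data onto it
    gluster volume add-brick myvol server3:/bricks/b1 server4:/bricks/b1
    gluster volume rebalance myvol start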
20:41 PatNarciso in a single server gluster setup -- where replication is ideal (for hardware failure prevention), and assuming I hate mdadm -- two bricks (two physical disks) would be required for a replication.
20:42 semiosis imho, there is no point to a single server gluster setup (except if you're developing gluster)
20:42 semiosis use lvm if you hate mdadm
20:42 PatNarciso AND if I wanted to expand the capacity, I would be required to add a third AND fourth brick.
20:43 semiosis or you could just expand the bricks themselves, say with lvm
20:43 semiosis but then why not just use lvm, and skip gluster altogether
20:44 PatNarciso I must retire another server before making it the second gluster server... it's a "we don't have money for another server" thing.
20:46 semiosis how much $ worth of your time will be spent to avoid buying a server?
20:46 semiosis (rhetorical question)
20:47 semiosis well maybe it's a really expensive server
20:47 PatNarciso nah.  it's not.
20:47 PatNarciso you make a valid point (always do sir).
20:47 semiosis ha
20:47 JoeJulian My answer to "we don't have money to do this adequately" is, "Ok, let's table this until we're ready."
20:47 semiosis +1
20:48 PatNarciso alright fellas.  seems an amazon order is in my future.
20:48 JoeJulian Woot!
20:48 JoeJulian New toys!
20:50 PatNarciso bbiaf, gotta take the cousin to the airport.   thanks guys.
20:50 semiosis yw ttyl
21:12 _pol joined #gluster
21:31 _pol joined #gluster
21:47 psyl0n joined #gluster
21:49 psyl0n joined #gluster
21:57 ninkotech_ joined #gluster
22:04 dbruhn__ joined #gluster
22:17 jbrooks Is it possible to have a distributed-replicated volume across 3 nodes with replica 2?
22:17 semiosis jbrooks: possible, but not recommended
22:18 semiosis best practice is to have servers a multiple of your replica count
22:18 ninkotech_ joined #gluster
22:19 jbrooks semiosis: Ok, thanks. I'm using 3 servers in my lab right now, I want replica 2, and I figured I'd spread it across the three
22:22 khushildep joined #gluster
22:23 gdubreui joined #gluster
22:23 jbrooks semiosis: I suppose, though, that there might be some performance benefit to replica 3, just no added capacity
22:24 semiosis quorum is the main benefit imho
22:24 jbrooks yeah
22:24 semiosis i suppose it's possible that you could have better read performance with an extra replica
22:24 jbrooks If I add a fourth node in the future, is it a pain to drop back to replica 2?
22:24 semiosis idk how painful that is
22:24 jbrooks from 3
22:25 jbrooks OK, it's in a lab, anyway
22:25 jbrooks So some pain isn't the end of the world
22:25 jbrooks :)
22:25 semiosis let me know how that goes :)
22:26 jbrooks :)
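For what it's worth, dropping from replica 3 back to replica 2 is usually done by removing one brick per replica set while lowering the count in the same command; a sketch with placeholder names, worth testing in the lab first:

    # the remaining two replicas already hold the data, hence "force"
    gluster volume remove-brick myvol replica 2 server3:/bricks/b1 force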
22:28 bgpepi joined #gluster
22:30 ninkotech_ joined #gluster
22:36 mvm joined #gluster
22:39 mvm left #gluster
22:53 ninkotech__ joined #gluster
22:54 kmai007 joined #gluster
22:54 kmai007 semosis
22:54 kmai007 i think i was able to track down my issue from earlier
22:54 semiosis ??
22:54 kmai007 https://bugzilla.redhat.com/process_bug.cgi
22:54 glusterbot Title: Log in to Red Hat Bugzilla (at bugzilla.redhat.com)
22:54 kmai007 the disconnect that was logged
22:54 yinyin joined #gluster
22:55 kmai007 and i gave you like 3 lines of log messages
22:55 kmai007 remember?
22:55 kmai007 i was changing a volume setting,
22:57 semiosis orly
22:57 kmai007 come again?
22:57 ricky-ti1 joined #gluster
22:59 semiosis oh really?
23:05 pdrakeweb joined #gluster
23:10 dbruhn joined #gluster
23:19 ninkotech_ joined #gluster
23:23 theron joined #gluster
23:25 theron joined #gluster
23:26 TvL2386 joined #gluster
23:27 ninkotech__ joined #gluster
23:28 theron joined #gluster
23:34 ninkotech__ joined #gluster
23:42 dylan_ joined #gluster
23:47 ninkotech__ joined #gluster
23:47 PatNarciso so -- hmm.  can anyone recommend a good bit-level drive data recovery tool?
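The question goes unanswered in the log; one commonly used tool for block-level recovery is GNU ddrescue, sketched here with placeholder device and output paths:

    # first pass: grab everything readable, skipping slow areas,
    # and keep a map file so the run can be resumed
    ddrescue -d -n /dev/sdX /mnt/space/sdX.img /mnt/space/sdX.map
    # second pass: retry the bad areas a few times
    ddrescue -d -r3 /dev/sdX /mnt/space/sdX.img /mnt/space/sdX.map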
23:48 badone joined #gluster
23:54 TrDS joined #gluster
23:58 ninkotech__ joined #gluster
23:59 ninkotech joined #gluster
