
IRC log for #gluster, 2014-04-25


All times shown according to UTC.

Time Nick Message
00:08 elico left #gluster
00:08 JoseBravo joined #gluster
00:09 B21956 joined #gluster
00:09 JoseBravo I installed glusterfs and I was testing it, but one of the nodes is not online. I don't know why it got disconnected or how to put it back online
00:10 JoseBravo /etc/init.d/glusterfsd status: glusterfsd is stopped
00:11 JoseBravo And I tried to start it, but it does not start
00:13 Pavid7 joined #gluster
00:16 yinyin_ joined #gluster
00:21 jbd1 JoseBravo: check your system logs
00:25 JoseBravo dmesg or /var/log/messages didn't show me anything
00:26 jbd1 do you see /var/log/glusterfs
00:27 JoseBravo There are many files/folders, I don't know which one to look at
00:28 jbd1 JoseBravo: Does the folder /var/log/glusterfs exist?
00:28 JoseBravo I checked all, but apparently some daemon was trying to connect to a not listening por
00:28 JoseBravo Yes
00:28 JoseBravo *port
00:29 JoseBravo I rebooted the server and it's online now
00:29 jbd1 JoseBravo: good job
00:29 JoseBravo How can I check the sanity of a brick?
00:30 JoseBravo I mean, to see if both gluster servers have the same data?
00:30 jbd1 JoseBravo: gluster volume status <volume-name> will verify that your brick has rejoined
00:30 jbd1 JoseBravo: gluster will automatically heal the brick that rebooted if any files changed while it was offline.
00:30 JoseBravo And is there any way to see if it's doing that?
00:31 jbd1 JoseBravo: /var/log/glusterfs/glustershd.log
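A quick way to see the self-heal activity jbd1 mentions, sketched for a hypothetical volume named "gv0" (substitute your own volume name):

    # list any entries still pending heal on each brick
    gluster volume heal gv0 info
    # follow the self-heal daemon's log on the server
    tail -f /var/log/glusterfs/glustershd.log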
00:32 JoseBravo Ok, another question... Is there any good script to monitor the sanity?
00:32 JoseBravo I mean all peers are online
00:33 jbd1 JoseBravo: I wrote my own for nagios.  What you need depends on your monitoring software.
00:35 edong23 joined #gluster
00:35 jag3773 joined #gluster
00:42 JoseBravo jbd1 I just want an email alert if something happens
00:43 elico joined #gluster
00:47 nueces joined #gluster
00:55 jbd1 JoseBravo: you'll need to write your own script to do it in that case.
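A rough, cron-able sketch of the kind of script jbd1 means, assuming a working `mail` command; the recipient address is a placeholder:

    #!/bin/bash
    # alert if any peer shows up as disconnected
    if gluster peer status | grep -q "Disconnected"; then
        gluster peer status | mail -s "gluster peer down on $(hostname)" admin@example.com
    fi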
00:55 tjikkun_work joined #gluster
00:59 JoseBravo jbd1 now it's the other gluster server that is down
01:05 JoseBravo The file /var/log/glusterfs/bricks/export-backups-brick.log  is there, looks like something killed the glusterfsd http://fpaste.org/96837/98387868/
01:05 glusterbot Title: #96837 Fedora Project Pastebin (at fpaste.org)
01:08 JoseBravo joined #gluster
01:09 yinyin_ joined #gluster
01:20 Andy5_ joined #gluster
01:34 glusterbot New news from newglusterbugs: [Bug 1091133] Glusterfsd daemon crash <https://bugzilla.redhat.com/show_bug.cgi?id=1091133>
01:35 6JTAAF6H4 joined #gluster
01:47 vpshastry joined #gluster
01:54 gmcwhistler joined #gluster
02:08 ykim joined #gluster
02:11 vpshastry joined #gluster
02:15 yinyin_ joined #gluster
02:17 Matthaeus joined #gluster
02:22 ninkotech joined #gluster
02:23 tdasilva left #gluster
02:26 bala joined #gluster
02:27 RameshN joined #gluster
02:28 yosafbridge joined #gluster
02:29 JonathanD joined #gluster
02:29 Amanda joined #gluster
02:30 harish_ joined #gluster
02:37 jag3773 joined #gluster
02:43 haomaiwa_ joined #gluster
02:47 ninkotech joined #gluster
03:11 shubhendu joined #gluster
03:16 jbrooks joined #gluster
03:18 ajha joined #gluster
03:32 nishanth joined #gluster
03:32 nthomas joined #gluster
03:35 rypervenche joined #gluster
03:36 rypervenche Hi all. I have a quick question. If I set up glusterfs on two cloud servers, would it be best to mount each glusterfs volume locally on each server? If I understood correctly, I basically have the choice of which server to mount the volume from?
03:38 Alex You have the choice of which server you use for the mount, but it'll still talk to both bricks when accessing files.
03:40 rypervenche Alex: Awesome. Thank you. This was stupid simple to set up. I'm reading the full guide now.
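A minimal example of what Alex describes, assuming a replica volume named "gv0" on hosts "server1" and "server2" (all names hypothetical):

    # server1 here is only used to fetch the volume layout;
    # file I/O then goes to both bricks directly
    mount -t glusterfs server1:/gv0 /mnt/gv0
    # optionally name a fallback for that initial volfile fetch
    mount -t glusterfs -o backupvolfile-server=server2 server1:/gv0 /mnt/gv0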
03:49 kanagaraj joined #gluster
03:51 purpleidea can someone run: facter -p | grep -i oper | fpaste please?
03:51 purpleidea on a *debian* machine please
03:51 purpleidea err.. and instead of fpaste, just paste the output somewhere (even here)
03:53 itisravi joined #gluster
03:54 purpleidea rypervenche: you mount the volume where you want to use it. typically you don't mount it on your servers.
03:55 rypervenche purpleidea: Yeah, I figured out that you would normally run this on a separate server. I'm just thinking for people who have a minimal two cloud server set up, it could be a good way to keep data in sync, such as for web server content.
03:56 purpleidea rypervenche: as long as you understand what you're doing!
03:56 purpleidea rypervenche: do you have access to a debian machine?
03:56 rypervenche purpleidea: I do. Let me run it for you.
03:56 purpleidea thanks
03:56 bharata-rao joined #gluster
03:57 purpleidea i'm porting puppet-gluster to other platforms... you should check it out (plug)
03:57 rypervenche If you're porting, you should go Gentoo ~_^
03:57 purpleidea rypervenche: then run this on a gentoo machine too!
03:58 purpleidea but you'll be pleased to know i'm porting this in such a way that you can "drop" in a distro-specific yaml file, and suddenly it should work for you (or it's a bug)
03:58 rypervenche I'm assuming fpaste is a command to pastebin it?
03:58 purpleidea rypervenche: yeah. any paste or even just do it here (this once)
04:00 rypervenche http://paste.debian.net/95589
04:00 glusterbot Title: debian Pastezone (at paste.debian.net)
04:00 purpleidea rypervenche: thanks!
04:00 Humble joined #gluster
04:01 purpleidea rypervenche: can you confirm that facter -p | grep osfamily
04:01 purpleidea is 'Debian' ?
04:01 rypervenche It is.
04:02 rypervenche purpleidea: For Gentoo: https://k.ryp.io/view/b1ee92f0
04:02 glusterbot Title: facter - Ryper's paste (at k.ryp.io)
04:03 purpleidea rypervenche: sounds good!
04:03 harish joined #gluster
04:09 bala joined #gluster
04:10 sputnik13 joined #gluster
04:11 sputnik13 joined #gluster
04:13 haomai___ joined #gluster
04:15 deepakcs joined #gluster
04:31 sputnik13 joined #gluster
04:33 kumar joined #gluster
04:36 sputnik13 joined #gluster
04:42 bala joined #gluster
04:44 hatsari joined #gluster
04:45 Pavid7 joined #gluster
04:47 sputnik13 joined #gluster
04:50 kasturi joined #gluster
04:53 hatsari joined #gluster
04:54 atinmu joined #gluster
04:56 sputnik13 joined #gluster
04:56 ravindran1 joined #gluster
04:57 hatsari joined #gluster
05:02 rjoseph joined #gluster
05:02 ppai joined #gluster
05:04 Philambdo joined #gluster
05:10 kanagaraj joined #gluster
05:11 lalatenduM joined #gluster
05:15 kanagaraj joined #gluster
05:16 hagarth joined #gluster
05:26 a2 joined #gluster
05:27 davinder joined #gluster
05:28 a2 joined #gluster
05:28 Bardack joined #gluster
05:29 mohan_ joined #gluster
05:31 Honghui_ joined #gluster
05:31 prasanthp joined #gluster
05:33 jbrooks joined #gluster
05:34 Matthaeus joined #gluster
05:35 glusterbot New news from newglusterbugs: [Bug 1086760] Add documentation for the Feature: Write Once Read Many (WORM) volume <https://bugzilla.redhat.com/show_bug.cgi?id=1086760>
05:36 rahulcs joined #gluster
05:37 deepakcs joined #gluster
05:44 kdhananjay joined #gluster
05:47 davinder2 joined #gluster
05:48 haomaiwa_ joined #gluster
05:52 ppai joined #gluster
05:53 pete29m joined #gluster
05:56 twx joined #gluster
06:10 aravindavk joined #gluster
06:11 vpshastry joined #gluster
06:22 bala joined #gluster
06:37 social joined #gluster
06:39 dusmant joined #gluster
06:39 rahulcs joined #gluster
06:39 aravindavk joined #gluster
06:41 psharma joined #gluster
06:43 ngoswami joined #gluster
06:46 nshaikh joined #gluster
06:47 Ylann joined #gluster
06:48 Arrfab joined #gluster
06:49 mohan_ joined #gluster
06:57 ekuric joined #gluster
06:58 XpineX joined #gluster
06:58 Andy5_ joined #gluster
06:58 Philambdo joined #gluster
06:59 ctria joined #gluster
07:02 ktosiek joined #gluster
07:03 eseyman joined #gluster
07:05 rastar joined #gluster
07:06 Philambdo joined #gluster
07:10 davinder joined #gluster
07:20 haomai___ joined #gluster
07:30 mohan_ joined #gluster
07:34 edward2 joined #gluster
07:45 ThatGraemeGuy joined #gluster
07:45 Ylann joined #gluster
07:45 dusmant joined #gluster
07:51 ppai joined #gluster
07:58 RameshN_ joined #gluster
08:01 rahulcs_ joined #gluster
08:02 Andy5_ joined #gluster
08:03 liquidat joined #gluster
08:05 glusterbot New news from newglusterbugs: [Bug 1089172] MacOSX/Darwin port <https://bugzilla.redhat.com/show_bug.cgi?id=1089172>
08:08 andreask joined #gluster
08:09 rbw joined #gluster
08:11 ricky-ti1 joined #gluster
08:19 itisravi joined #gluster
08:25 Honghui_ joined #gluster
08:38 Honghui__ joined #gluster
08:40 ppai joined #gluster
08:40 rtalur_ joined #gluster
08:44 rahulcs joined #gluster
08:51 mohan_ joined #gluster
08:52 rjoseph joined #gluster
08:55 Durzo joined #gluster
08:59 vimal joined #gluster
09:01 saravanakumar1 joined #gluster
09:04 ppai joined #gluster
09:24 davinder2 joined #gluster
09:24 deepakcs joined #gluster
09:34 rahulcs joined #gluster
09:36 davinder joined #gluster
09:39 Chewi joined #gluster
09:40 aravindavk joined #gluster
09:49 rahulcs joined #gluster
09:50 haomaiwang joined #gluster
09:54 davinder2 joined #gluster
09:54 Honghui_ joined #gluster
09:56 purpleidea semiosis: you around? i'm finishing off my puppet-gluster porting... w00t! you're going to love the implementation...
09:56 purpleidea semiosis: anyways, the question is, i'm trying to identify any places where i have hardcoded an os specific path or string.
09:57 purpleidea semiosis: if you've got an ubuntu/debian box that i can ssh into for a few hours, that would be sweet...
09:57 purpleidea semiosis: if you have a list of things that were wrong, that's sweet too...
09:58 purpleidea semiosis: i'm guessing /var/lib/glusterd/ is different. also is glusterd service called 'glusterd' ? or is it different.
10:00 purpleidea semiosis: current WIP untested branch is: https://github.com/purpleidea/puppet-gluster/tree/feat/yamldata the cool thing is that once the code is done, you just "drop in" a yaml file to data/ directory, and it's ported!
10:00 glusterbot Title: purpleidea/puppet-gluster at feat/yamldata · GitHub (at github.com)
10:05 edward2 joined #gluster
10:05 purpleidea (also: anyone who has gluster running on debian/ubuntu/fedora who can give me a read only shell on their box for a few hours, ping me, and puppet-gluster might work on your distro!)
10:06 glusterbot New news from newglusterbugs: [Bug 1089216] Meta translator <https://bugzilla.redhat.com/show_bug.cgi?id=1089216>
10:06 kanagaraj joined #gluster
10:08 rahulcs joined #gluster
10:10 Chewi joined #gluster
10:10 hagarth joined #gluster
10:16 meghanam joined #gluster
10:16 meghanam_ joined #gluster
10:24 foster joined #gluster
10:25 basso joined #gluster
10:26 andreask joined #gluster
10:31 bala joined #gluster
10:31 ravindran1 joined #gluster
10:36 Slashman joined #gluster
10:43 liquidat joined #gluster
10:43 kanagaraj joined #gluster
10:50 foster joined #gluster
10:56 saurabh joined #gluster
11:02 yinyin joined #gluster
11:07 Andy5_ joined #gluster
11:09 rahulcs joined #gluster
11:16 jiffe98 joined #gluster
11:27 andreask joined #gluster
11:33 rahulcs joined #gluster
11:36 surabhi joined #gluster
11:37 foster joined #gluster
11:38 rjoseph joined #gluster
11:40 bala1 joined #gluster
11:44 rahulcs joined #gluster
11:48 foster_ joined #gluster
11:53 hug joined #gluster
11:53 hug hi all
11:54 hug does someone know where I can find a tool to calculate the usable volume space created by a farm of gluster servers?
11:54 glusterbot New news from resolvedglusterbugs: [Bug 1049981] 3.5.0 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1049981>
12:09 kanagaraj joined #gluster
12:15 harish joined #gluster
12:15 tjikkun_work joined #gluster
12:15 rjoseph joined #gluster
12:17 psharma joined #gluster
12:21 yinyin joined #gluster
12:28 Humble joined #gluster
12:30 foster joined #gluster
12:32 kanagaraj joined #gluster
12:32 sroy joined #gluster
12:36 jmarley joined #gluster
12:36 jmarley joined #gluster
12:45 jobewan joined #gluster
12:45 tdasilva joined #gluster
12:47 JoseBravoHome joined #gluster
12:51 JoseBravoHome I have installed version 3.5.0-2 and I have two replicas working. Each server and the client has a 2x1Gbit LACP bond. And when I do "dd if=/dev/zero of=testfile5.bin bs=100M count=10" into a fuse mount from the client, it kills the glusterfsd of one of my replicas.
12:51 JoseBravoHome I pasted my bricks log here: http://fpaste.org/96837/98387868/
12:51 glusterbot Title: #96837 Fedora Project Pastebin (at fpaste.org)
12:54 tru_tru joined #gluster
12:55 Ark joined #gluster
12:56 bfoster joined #gluster
12:57 bennyturns joined #gluster
13:02 jmarley joined #gluster
13:02 jmarley joined #gluster
13:06 glusterbot New news from newglusterbugs: [Bug 1091372] Behaviour of glfs_fini() affecting QEMU <https://bugzilla.redhat.com/show_bug.cgi?id=1091372>
13:09 giannello joined #gluster
13:10 diegows joined #gluster
13:11 rahulcs joined #gluster
13:13 dusmant joined #gluster
13:14 suliba_ joined #gluster
13:14 seddrone_ joined #gluster
13:17 rahulcs joined #gluster
13:21 rjoseph joined #gluster
13:25 mohan_ joined #gluster
13:26 bala joined #gluster
13:28 mjsmith2 joined #gluster
13:28 rahulcs joined #gluster
13:39 rahulcs joined #gluster
13:39 rahulcs joined #gluster
13:42 tziOm joined #gluster
13:46 japuzzo joined #gluster
13:49 Ark joined #gluster
13:53 Andy5_ joined #gluster
13:55 ctria joined #gluster
13:58 mohan_ joined #gluster
13:58 rahulcs joined #gluster
14:01 basso joined #gluster
14:07 rahulcs joined #gluster
14:08 rahulcs joined #gluster
14:10 edward2 joined #gluster
14:10 Chewi hello all. I'm new to this but have successfully tried geo-rep on 3.5.0. I've seen the new tar+ssh feature, looks good, but nothing has been said about the tar_ssh.pem file that the config references. why is a separate key needed? does it not use gsyncd on the other end? what command should I lock it down to in authorized_keys? (bug #1091079 notwithstanding)
14:10 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1091079 unspecified, unspecified, ---, csaba, NEW , Testing_Passwordless_SSH check in gverify.sh conflicts with documentation
14:13 jbrooks joined #gluster
14:17 liquidat joined #gluster
14:31 hug left #gluster
14:37 dbruhn joined #gluster
14:38 gmcwhistler joined #gluster
14:40 scuttle_ joined #gluster
14:41 dblack joined #gluster
14:47 John_HPC joined #gluster
14:48 gmcwhist_ joined #gluster
14:50 shubhendu joined #gluster
14:50 davinder joined #gluster
14:51 foster joined #gluster
14:54 jbrooks joined #gluster
14:59 giannello joined #gluster
14:59 sroy__ joined #gluster
15:02 jag3773 joined #gluster
15:06 kanagaraj joined #gluster
15:10 steved_ joined #gluster
15:11 steved_ I've got a failure state upgrading a replica 2 cluster from 3.4.2 to 3.5.0.2. If the gluster daemon is started on either node I can check volume status. Once I start glusterd on both nodes, volume status and peer status give no output
15:12 steved_ I have some errors in the log on daemon start
15:12 steved_ [2014-04-25 14:59:12.026248] I [glusterd-store.c:1421:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 2 [2014-04-25 14:59:12.059439] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0 [2014-04-25 14:59:12.059479] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
15:14 steved_ [glusterd-utils.c:1499:glusterd_brick_unlink_socket_file] 0-management: Failed to remove /var/run/0b9e6d7d319b0f50e386dcb3973941ac.socket error: No such file or directory [2014-04-25 15:04:52.177688] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
15:18 rwheeler_ joined #gluster
15:19 JoseBravoHome joined #gluster
15:19 lmickh joined #gluster
15:20 rwheeler_ joined #gluster
15:25 ktosiek joined #gluster
15:28 semiosis as Durzo pointed out, my qemu ppa is outdated. i'll update tonight
15:29 semiosis monotek has a qemu ppa also, btw
15:37 JoeJulian purpleidea: don't you have access to the rackspace account?
15:39 daMaestro joined #gluster
15:40 JoeJulian JoseBravo/JoseBravoHome: Please file a bug report for that.
15:40 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
15:46 basso joined #gluster
15:46 tjikkun_work joined #gluster
15:46 Arrfab joined #gluster
15:46 JoeJulian semiosis: I spoke with Pat Gaughen, manager of Ubuntu Server and Openstack team at Canonical, last night about helping push your packages into Ubuntu Server main for the upcoming unicorn release, and Universe for LTS releases. She's just starting a vacation but will pop in after she gets back.
15:46 JoseBravoHome JoeJulian I filed the bug yesterday: https://bugzilla.redhat.com/show_bug.cgi?id=1091133
15:46 glusterbot Bug 1091133: urgent, unspecified, ---, rwheeler, NEW , Glusterfsd daemon crash
15:46 semiosis http://i.imgur.com/WPkj5TQ.jpg
15:47 basso joined #gluster
15:47 semiosis JoeJulian: neat!
15:47 JoseBravoHome JoeJulian could the LACP bonding be causing the problem?
15:47 semiosis JoeJulian: now that we have another 2 years it should be easy
15:47 JoeJulian hehe
15:48 JoeJulian JoseBravoHome: Could be, that often seems to produce headaches,  but it still shouldn't crash.
15:50 Honghui joined #gluster
15:51 JoeJulian semiosis: I asked her, in front of a room full of people, that since she's included Icehouse (the latest OpenStack) in 14.04 and that has strong support for GlusterFS in Cinder and Swift, why is GlusterFS missing from that release? I had quite a few agreeing nods and +1's.
15:51 * JoeJulian is a troublemaker. :D
15:54 JoeJulian And they can't blame it on code freeze. Icehouse was in at the last possible second.
15:56 kkeithley and what did she say by way of an answer?
15:57 kkeithley or she dodged it, and is dodging it until she gets back from vaca?
15:57 ProT-0-TypE joined #gluster
16:00 mohan_ joined #gluster
16:00 giannello joined #gluster
16:00 semiosis kkeithley: bug 1086460 shows the official reason why glusterfs isn't in ubuntu main
16:00 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1086460 unspecified, unspecified, ---, rwheeler, NEW , Ubuntu code audit results (blocking inclusion in Ubuntu Main repo)
16:02 JoeJulian kkeithley: Asked me to email her to remind her. I did so. She said she'll help after vaca.
16:04 JoeJulian I wonder if cppcheck should be added to Jenkins?
16:06 vpshastry joined #gluster
16:10 vpshastry joined #gluster
16:11 Andy5_ joined #gluster
16:14 cvdyoung joined #gluster
16:15 JoseBravoHome I disabled the bonding, and did a 10G transfer from a client fuse mount: dd if=/dev/zero of=testfile9.bin bs=100M count=100 but it stops transmitting at 6.3G and on the client side I get a Transport endpoint is not connected
16:15 JoseBravoHome dd: writing `testfile9.bin': Transport endpoint is not connected dd: closing output file `testfile9.bin': Transport endpoint is not connected
16:17 semiosis JoseBravoHome: check the client log file
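The FUSE client writes its log to a file named after the mount point, so something like the following shows why the mount dropped (path shown for a mount at /mnt/gv0; adjust to your own mount point):

    tail -n 50 /var/log/glusterfs/mnt-gv0.log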
16:18 Peanut joined #gluster
16:18 JoeJulian ttfn. I've got to head down to Seattle.
16:18 Mo_ joined #gluster
16:18 dbruhn JoseBravoHome, I've usually seen that as a result from a network hiccup.
16:21 sputnik13 joined #gluster
16:23 cvdyoung Hello, I have installed glusterfs-server onto a single server with a dedicated LSI card.  It's set for RAID6 with 18 4TB drives, Mellanox IB ConnectX cards, and Scientific Linux 6.4 installed.  When I run a speed test locally of write/read speed I am seeing decent performance.  When I mount the brick using glusterfs locally to another point, I notice a slight degradation in performance.  But, there's a huge performance hit when mounting remotely using the
16:24 dbruhn cvdyoung, single server?
16:25 cvdyoung yessir, single server with 2 procs, 6 cores intel x86_64
16:25 cvdyoung It's not in production yet, we're experimenting with gluster.
16:25 dbruhn Ahh ok
16:26 dbruhn What kind of hit are you seeing?
16:26 cvdyoung Yeah, so I can make any changes to it without causing any pain.  I just have no idea where to start  ;)
16:26 dbruhn Understood
16:26 cvdyoung Our company creates very large files, lots of them in our job runs
16:27 dbruhn I am assuming your initial statement was going to end with "mounting remotely using the gluster native client"?
16:27 cvdyoung yes, exactly.  Mounted over our infiniband network
16:28 dbruhn So what is your performance in each case
16:29 dbruhn and are you using RDMA or TCP?
16:29 dbruhn Also, how many bricks did you setup when you created the volume
16:29 zerick joined #gluster
16:30 cvdyoung Locally the single brick is located at /data/brick, and mounted via gluster to /home.  We are seeing 1.0GB/s write and 475MB/s read.  TCP is used.
16:30 cvdyoung remotely we see 125MB/s for both write and read
16:31 dbruhn If you do a similar test with say NFS do you see something similar?
16:33 vpshastry left #gluster
16:34 hagarth joined #gluster
16:36 LoudNoises joined #gluster
16:38 shubhendu joined #gluster
16:38 [o__o] joined #gluster
16:40 dblack joined #gluster
16:45 ndk joined #gluster
16:45 theron joined #gluster
16:48 Gilbs joined #gluster
16:53 jag3773 joined #gluster
16:54 JoseBravoHome After the "Transport endpoint is not connected" error all gluster servers are disconnected. gluster volume status shows that both servers are not online
16:54 JoseBravoHome And /etc/init.d/glusterfsd status reports glusterfsd is stopped
16:57 JoseBravoHome dbruhn why would a network hiccup happen?
17:00 dbruhn JoseBravoHome, I was having issues on Redhat where Network Manager would try and grab a DHCP address on a network card that was statically set, and it would cause a short timeout, and the client would disconnect.
17:01 kkeithley JoeJulian: I'll try running cppcheck to see how long it takes. I'm setting up a bunch of machines to routinely run things like coverity, cppcheck, valgrind, etc. If cppcheck takes too long perhaps we just want a daily run instead of a run every time someone commits
17:01 dbruhn So it sounds unrelated to what you are experiencing
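For reference, the usual way to stop NetworkManager from touching a statically configured interface on EL6, as in the situation dbruhn describes, is to mark it unmanaged; the interface name below is just an example:

    # /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    BOOTPROTO=static
    ONBOOT=yes
    NM_CONTROLLED=no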
17:02 kkeithley semiosis: it's kinda disappointing that the person who ran cppcheck didn't open a bug themselves.
17:04 Gilbs Does 3.5 still have memory leak issues?
17:04 mohan_ joined #gluster
17:07 glusterbot New news from newglusterbugs: [Bug 1086460] Ubuntu code audit results (blocking inclusion in Ubuntu Main repo) <https://bugzilla.redhat.com/show_bug.cgi?id=1086460>
17:08 kkeithley several memory leaks were fixed in 3.4.3 and 3.5.0. Give it a try and see.
17:09 gmcwhistler joined #gluster
17:10 yinyin joined #gluster
17:12 Gilbs I'm on 3.5 now and am still having memory issues.   I run a security/port scan and 16G are eaten up in 30 seconds before the oom-killer does his thing.
17:21 JoeJulian Gilbs: tell me more about this security/port scan and how it interacts with GlusterFS.
17:22 SFLimey joined #gluster
17:25 saravanakumar joined #gluster
17:25 theron joined #gluster
17:28 Matthaeus joined #gluster
17:35 kanagaraj_ joined #gluster
17:48 Licenser joined #gluster
17:51 brokeasshachi joined #gluster
17:51 wushudoin joined #gluster
17:52 cvdyoung dbruhn:  Just finished testing the NFS client remotely:  Writes are 497MB/s and reads are 248MB/s using 42G file size
17:55 zaitcev joined #gluster
18:00 JoseBravoHome I removed the options: performance.cache-size: 1GB, performance.write-behind-window-size: 512MB, performance.stat-prefetch: 1, performance.cache-refresh-timeout: 1, performance.read-ahead: off and apparently the problem is gone.
18:00 JoseBravoHome Which one could be causing the problem?
18:02 pietschee joined #gluster
18:02 pietschee if somebody is interested in glusterfs support for qemu 2.0 in ubuntu trusy check out this ppa: https://launchpad.net/~monotek/+archive/qemu-glusterfs
18:02 glusterbot Title: qemu-glusterfs : André Bauer (at launchpad.net)
18:13 yinyin joined #gluster
18:15 deeville joined #gluster
18:21 Mneumonik joined #gluster
18:24 Mneumonik Hey all, trying to get a 6 node cluster going, with windows nfs client (sucks, i know). I've googled the hell out of permissions issues but there doesn't seem to be any answers... I can mount nfs on linux but in windows on some nodes i get a "can't access" error, when i connect on other nodes I get an empty volume when there are files i have written on the linux nfs client. Any ideas?
18:25 Mneumonik Oh, and some give an access error with "the parameter is incorrect"
18:25 _dist joined #gluster
18:25 kkeithley Mneumonik: protocol=3?
18:26 Mneumonik i just used gluster volume create gv0 replica 2, how would i try protocol=3?
18:27 jbd1 Mneumonik: kkeithley is asking about windows client-side NFS mount options
18:27 kkeithley gluster NFS servers only do NFSv3.  Are you clients using -o proto=3
18:27 kkeithley Are your clients
18:27 jbd1 Mneumonik: if you're able to mount from linux clients, but not on windows clients, the key to resolving the issue is to inspect the options you're using to mount the volume on the client side
18:27 Mneumonik not sure, it's just the windows server 2008 nfs client, would i find that in the registry?
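Since gluster's built-in NFS server only speaks NFSv3 over TCP, both clients need to ask for that explicitly. Rough examples, with server and volume names as placeholders:

    # Linux client
    mount -t nfs -o vers=3,proto=tcp,nolock server1:/gv0 /mnt/gv0
    # Windows Services for NFS client (cmd.exe)
    mount -o nolock \\server1\gv0 Z: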
18:30 kanagaraj joined #gluster
18:38 theron_ joined #gluster
18:42 theron joined #gluster
18:44 Gilbs JoeJulian:  Sorry, I was pulled into a meeting.  We are conducting our PCI audits and are running internal security scans, both with OpenVas and a professional company.  These as far as I know are port scans and bogus login attempts.  In top, Glusterfs is only using .7 cpu and .4 memory, but once the scan starts it jumps to 100% and memory is used until oom-killer kills glusterfs and glusterd.
18:44 dbruhn Mneumonik, probably not a ton of windows centric admins in here.
18:45 [o__o] joined #gluster
18:46 dbruhn cvdyoung, that's interesting. It might be a good idea to put in a bug report/feature request around performance. One thing to keep in mind, and I am assuming a bit here, is that gluster is built to scale and provide connections to a lot of systems; is the single-thread performance you are seeing acceptable to your application?
18:48 Mneumonik yeah, i know... I'm working with a .net app. We have a massive multi-million number of small files in a NAS FS and are looking to cluster it to scale horizontally. nfs seems to give better performance than cifs
18:48 JoeJulian Gilbs: So it's doing some sort of scan of the GlusterFS ports? Any idea what it's doing to them?
18:49 dblack joined #gluster
18:50 JoeJulian Mneumonik: True, though if you tune samba and use the libgfapi vfs you should, theoretically, be able to achieve better performance than nfs.
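A sketch of the libgfapi-backed share JoeJulian is referring to, using the Samba vfs_glusterfs module; volume and host names are placeholders:

    [gv0]
        vfs objects = glusterfs
        glusterfs:volume = gv0
        glusterfs:volfile_server = localhost
        # path is relative to the root of the gluster volume, not a local directory
        path = /
        kernel share modes = no
        read only = no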
18:50 Gilbs JoeJulian:  That is the crazy part we're trying to figure out, when we were running 3.3 we had no issues whatsoever.  Now we're on 3.4.2 and 3.5 and once these scans start, all memory is taken up by glusterfs/d.
18:50 dbruhn JoeJulian, is there a decent write up floating around that you've seen on that yet?
18:50 dbruhn @samba
18:50 glusterbot dbruhn: (#1) Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are available at http://download.gluster.org/pub/gluster/glusterfs/samba/, or (#2) Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are available at http://download.gluster.org/pub/gluster/glusterfs/samba/ mor information about alternate samba
18:50 glusterbot dbruhn: configurations can be found at http://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/, or (#3) Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are available at http://download.gluster.org/pub/gluster/glusterfs/samba/, or (#2) Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are (1 more message)
18:51 JoeJulian dbruhn: No, my theoretical "better than" is all in my head at this point.
18:51 dbruhn Looks like the lalatendumohanty blog article is down too
18:52 theron_ joined #gluster
18:52 dbruhn http://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/
18:52 dbruhn never mind, client link had the comma in it
18:53 rahulcs joined #gluster
18:54 JoeJulian I think that factoid needs to be optimized a bit...
18:54 dbruhn lol
18:56 JoeJulian @forget samba 2
18:56 glusterbot JoeJulian: The operation succeeded.
18:57 JoeJulian @forget samba 3
18:57 glusterbot JoeJulian: Error: Invalid factoid number.
18:57 JoeJulian @forget samba 4
18:57 glusterbot JoeJulian: Error: Invalid factoid number.
18:57 JoeJulian @learn samba more information about alternate samba configurations can be found at http://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/
18:57 glusterbot JoeJulian: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
18:57 JoeJulian @learn samba as more information about alternate samba configurations can be found at http://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/
18:57 glusterbot JoeJulian: The operation succeeded.
18:57 JoeJulian @samba
18:57 glusterbot JoeJulian: (#1) Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are available at http://download.gluster.org/pub/gluster/glusterfs/samba/, or (#2) Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are available at http://download.gluster.org/pub/gluster/glusterfs/samba/, or (#2) Samba 4.1.0 RPMs for Fedora 18, 19, 20,
18:57 glusterbot JoeJulian: 21/rawhide, with the new glusterfs vfs plug-in, are available at http://download.gluster.org/pub/gluster/glusterfs/samba/, or (#3) more information about alternate samba configurations can be found at http://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/
19:01 Ark joined #gluster
19:01 JoeJulian @forget samba 1
19:01 glusterbot JoeJulian: The operation succeeded.
19:01 JoeJulian @forget samba 2
19:01 glusterbot JoeJulian: The operation succeeded.
19:02 JoeJulian @forget samba
19:02 glusterbot JoeJulian: The operation succeeded.
19:03 JoeJulian @learn samba as  Samba 4.1.0 RPMs for Fedora 18+ with the new GlusterFS libgfapi VFS plug-in are available at http://download.gluster.org/pub/gluster/glusterfs/samba/
19:03 glusterbot JoeJulian: The operation succeeded.
19:03 JoeJulian @learn samba as more information about alternate samba configurations can be found at http://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/
19:03 glusterbot JoeJulian: The operation succeeded.
19:03 JoeJulian @samba
19:03 glusterbot JoeJulian: (#1) Samba 4.1.0 RPMs for Fedora 18+ with the new GlusterFS libgfapi VFS plug-in are available at http://download.gluster.org/pub/gluster/glusterfs/samba/, or (#2) more information about alternate samba configurations can be found at http://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/
19:03 JoeJulian That looks better.
19:04 JoeJulian ... except for that comma...
19:05 JoeJulian @samba
19:05 glusterbot JoeJulian: (#1) Samba 4.1.0 RPMs for Fedora 18+ with the new GlusterFS libgfapi VFS plug-in are available at http://download.gluster.org/pub/gluster/glusterfs/samba/ , or (#2) more information about alternate samba configurations can be found at http://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/
19:05 JoeJulian That's better.
19:25 dbruhn @party
19:25 glusterbot dbruhn: I do not know about 'party', but I do know about these similar topics: 'paste', 'ports'
19:25 dbruhn @learn party as https://www.youtube.com/watch?v=xemLz_fR1Ac
19:25 glusterbot dbruhn: The operation succeeded.
19:26 dbruhn ;)
19:30 crushkil1 left #gluster
19:33 daMaestro joined #gluster
19:35 edward2 joined #gluster
19:36 andreask joined #gluster
19:37 rahulcs joined #gluster
19:42 purpleidea JoeJulian: i do, but not anyone's gluster setup.
19:47 Mneumonik Anyone know why samba w/ vfs plugin would be extremely slow? My smb.confi includes [test]
19:47 Mneumonik comment = For testing a Gluster volume exported through CIFS
19:47 Mneumonik path = /mnt/gv0
19:47 Mneumonik available = yes
19:47 Mneumonik read only = no
19:47 Mneumonik browsable = yes
19:47 Mneumonik public = yes
19:47 Mneumonik writable = yes
19:47 Mneumonik guest ok = yes
19:47 Mneumonik force user = nobody
19:47 JoeJulian purpleidea: I thought you just wanted to test your module on different distros.
19:47 Mneumonik (those options are so I dont need to auth for this test)
19:47 JoeJulian @paste
19:47 glusterbot JoeJulian: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
19:47 Mneumonik also have kernel share modes = no in global
19:48 JoeJulian Please paste instead of spamming IRC channels.
19:49 JoeJulian oplocks? kernel locks? every other kind of lock that seems to be really inefficient there.
19:51 Mneumonik there's no files on the share yet, even trying to create a folder or create a new file hourglasses for 10 minutes.
19:52 Mneumonik sorry for the spam, i was hoping it would go into one line
19:53 JoeJulian That's certainly abnormal. I would probably look to see what's happening on wireshark.
19:55 Mneumonik actually could it be caused by map to guest = bad user?
20:05 yinyin joined #gluster
20:07 side_control joined #gluster
20:08 side_control guys i need some advice, trying to setup this rhev cluster on top of gluster, i got everything working but performance is abysmal, 5MB/s writes on the 3-brick replicated volume
20:09 side_control over gig-e, it doesn't need to be fast, just usable ;)
20:09 purpleidea JoeJulian: yeah just porting it at the moment though
20:15 kmai007 hey guys if i have 2 gluster clusters, and i'd like to merge them into 1, can i do that without rebuilding the servers?
20:20 Chewi joined #gluster
20:27 kmai007 anybody there?
20:28 dbruhn kmai007, sorry not sure the answer
20:29 dbruhn what do you have that your trying to merge together?
20:29 kmai007 i was hoping to be able to grow my bricks out
20:30 kmai007 4 storage servers by adding 2 more to make 6
20:30 kmai007 and just create new bricks to expand on
20:30 kmai007 while the gluster of 2 still keeps serving what it needs to
20:30 semiosis kkeithley: i am enduring persistent disappointment w/r/t the ubuntu MIR
20:30 kmai007 with its own bricks
20:32 dbruhn kmai007, so you want the two servers that exist in their own gluster peer group to become part of the peer group with the other four?
20:32 JoseBravoHome joined #gluster
20:35 semiosis kmai007: there's no easy way to do that.  you might be able to get away with merging the entries in /var/lib/glusterd/peers though, i've never tried
20:37 kmai007 i'm trying to keep what is existing, and "borrow" some resources by consuming two more servers
20:37 kmai007 i guess it won't work b/c the enumeration of volume ports all start at the same ports
20:38 kmai007 and if i have a volume on different gluster clusters, they'd share the same volume port designation
20:38 dbruhn kmai007, could you make a new volume on the 4 server system, migrate the data into the system, and then add the two servers as new brick servers to the system you want to expand?
20:38 kmai007 dang i answer my own question
20:39 kmai007 dbruhn: good suggestion, that would work, so basically once i have migrated the data, i'd destroy the 2-node gluster
20:39 kmai007 and re-peer them
20:39 dbruhn yep
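A rough outline of that suggestion in commands, with hypothetical host names, volume name, and brick paths (take backups and test on scratch hardware first):

    # after the two old servers have been emptied and their old volume deleted
    gluster peer probe server5
    gluster peer probe server6
    gluster volume add-brick gv0 server5:/export/brick1 server6:/export/brick1
    gluster volume rebalance gv0 start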
20:47 Joe630 joined #gluster
20:49 kmai007 but wouldn't that idea of one gluster cluster hooking up with another gluster cluster and only borrowing the resources....
20:49 kmai007 be awesome
20:50 wushudoin left #gluster
20:50 kmai007 well awesome for me, b/c i want to use its physcial resources
20:51 jbd1 joined #gluster
20:55 Mneumonik joined #gluster
21:05 yinyin_ joined #gluster
21:09 dblack joined #gluster
21:19 necrogami joined #gluster
21:46 dlambrig_ joined #gluster
21:55 kmai007 is this a setting that is set somewhere?  W [rpc-transport.c:175:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
21:55 kmai007 i see it in the gluster cli.log
21:55 kmai007 no volumes are created yet, just peer probing
22:03 nueces joined #gluster
22:05 kmai007 starting over i'm watching as I gluster volume 'set' a feature to a volume on the storage nodes, i'm tailing the /var/log/glusterfs/bricks/vol.log and i see that within the same second it does an rpc disconnect/reconnect
22:05 kmai007 is that the expected behavior?
22:08 yinyin_ joined #gluster
22:25 jbd1 kmai007: the transport-type thing is just a noisy message.  If you don't specify RDMA it uses tcp, aka "socket". Not sure about the rpc thing
22:25 JoeJulian Yes, rpc reconnection is expected when you change the graph.
22:25 * jbd1 is in the middle of testing what happens when a brick crashes during a rebalance
22:26 jbd1 (in the lab)
22:26 kmai007 gracias
22:26 kmai007 looks like what i did was not what i wanted
22:26 kmai007 my vol info now shows 4 x 2 = 8
22:26 kmai007 but i want 2 x 4 = 8
22:26 kmai007 for distr.-rep
22:26 kmai007 JoeJulian:
22:27 kmai007 when i peer new nodes, but i create brand new volumes, i shouldn't have to run a rebalance do i?
22:27 JoeJulian no
22:27 kmai007 i don't want to add new bricks to exisiting gluster volumes
22:28 JoeJulian You only have to rebalance when you change the number of distribute subvolumes within a volume.
22:28 kmai007 gotcha, so i'm in the green,
22:28 kmai007 hmm i want 2x4=8 distribution
22:29 kmai007 not sure how i got 4x2=8 distro
22:29 jbd1 for my test: I created a replica volume, filled it up with random files and dirs, then grew the volume to 2x2 (d-r).  After starting a rebalance, I killed one of the new bricks.  After waiting a few minutes, I started the brick back up.  What it *appears* to be doing is completing the rebalance on the nodes that did not crash, then allowing the self-heal daemon on the crashed node to make it match its replica.
22:30 kmai007 jbd1: intersting, i never thought about observing that
22:30 kmai007 thanks
22:30 jbd1 I'm deducing this from the fact that gluster volume rebalance status says "not started" for the crashed node
22:31 kmai007 i guess that makes sense, while a server is down, you cannot keep distribution until it is healed, so it's easier to distribute to others and heal it...
22:31 jbd1 kmai007: sorry for getting in the way of your discussion.  I've just had bad luck with stuff breaking in the middle of my rebalances, and the larger my volume gets, the more vulnerable I am to it (as rebalances take a long time)
22:31 kmai007 no no i'm learning, please continue
22:32 jbd1 Even my relatively-tiny test rebalance will probably take 12-24 hours to complete so nothing certain yet
22:32 JoeJulian The wider the dist the faster a rebalance should be. (theoretically)
22:32 jbd1 JoeJulian: that's promising.
22:34 jbd1 my "lab" is actually my (beefy) laptop with a bunch of VMs on it.  It's IO-bound given that all the volumes share the same 7200rpm disk.  I'm only seeing about 4 mbps (megabits) on the bridge between my four nodes
22:34 JoeJulian rebalance is run at a lower priority, so any activity on your volume should slow it down.
22:34 jbd1 there's no activity
22:35 JoeJulian Also, unless there's actually something moving, it's mostly just crawling directories.
22:36 kmai007 oh shit
22:36 kmai007 is replica 4 supported?
22:37 kmai007 on gluster3.4.2 ?
22:37 kmai007 i might have been building my replicas all wrong.......
22:37 jbd1 JoeJulian: yeah, it's crawling directories.  I intentionally made a bunch of directories (1,262,656 of them!) because my prod env is like that.
22:37 JoeJulian yep, but it's likely way overkill unless you're trying to design a 99.999999999 uptime system.
22:37 jbd1 kmai007: do you want 4 copies of every file?
22:37 kmai007 nope, i want 2 copies
22:37 jbd1 kmai007: that would be replica 2 then
22:38 kmai007 cool, but is replica 4 supported?
22:38 JoeJulian yes
22:38 JoeJulian supported but not very tested.... for what that's worth...
22:39 kmai007 b/c i remember reading somewhere, maybe redhat storage, that replica > 2 is tech preview. unsupported if greater
22:39 kmai007 oh shit
22:39 JoeJulian rhs doesn't support a lot of things that are common uses.
22:39 kmai007 i hear ya man, just trying to read where i can
22:40 JoeJulian Just remember that any commercial company is going to only support use cases that are cost effective for them to support.
22:40 kmai007 basically when i chose replica 4 of my 8 servers, i get 2 equal parts of data
22:40 kmai007 true....
22:40 JoeJulian @brick order
22:40 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
22:40 JoeJulian You should be able to extrapolate from that.
22:41 kmai007 correct, so i've used that pattern schema
22:41 JustinClift JoeJulian: Saw your email about kkeithley checking cppcheck.  That's good news.
22:43 JoeJulian For servers named sN with bricks on /b1, a replica 4 over 8 servers would group s{1..4}:/b1 and s{5..8}:/b1, distributing across those two replica subvolumes.
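To make the two layouts concrete (hypothetical server and brick names), the same eight bricks give either arrangement depending on the replica count passed to create:

    # replica 4: two replica sets of four copies each (shown as 2 x 4 = 8)
    gluster volume create myvol replica 4 s{1..8}:/b1
    # replica 2: four replica sets of two copies each (shown as 4 x 2 = 8)
    gluster volume create myvol replica 2 s{1..8}:/b1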
22:44 kmai007 correcto'
22:44 kmai007 that what i have in production now
22:44 cvdyoung Ok, I know what I was doing wrong now.  I had the local mount going out the 1G NIC...
22:44 cvdyoung I still see gluster mounts slower than a regular nfs mount.  NFS 580MB/s write 780MB/s read, and GLUSTER 455MB/s write, 360MB/s read.  Why is this???
22:44 cvdyoung Thanks!
22:45 avati cvdyoung, what is the block size you are using for IO?
22:46 JoeJulian Probably in part due to the artificial numbers that nfs gives you due to fscache.
22:46 kmai007 yeh echo 3> something, and test again
22:46 JoeJulian /proc/vm/drop_caches
22:46 JoeJulian /proc/sys/vm/drop_caches
22:47 JoeJulian I'm not entirely sure fscache honors that though.
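For what it's worth, the usual way to flush the page and dentry caches between benchmark runs, as root on the client:

    sync
    echo 3 > /proc/sys/vm/drop_caches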
22:48 kmai007 anybody got repeatable steps i can follow to test the limits of gluster / possibly break it?  i'm trying to "recreate" my nightmare from a production migration attempt to gluster and I cannot get it to happen in my R&D environment
22:48 kmai007 fscache, i suppose u could unmount/remount
22:50 kmai007 JoeJulian: you wouldn't think that i'd gain any performance in my replica distribution would i?
22:50 kmai007 if i stuck to the replica=2 case
22:54 JoeJulian kmai007: If you have enough simultaneous requests for single files that you're saturating your two replicas, then yes, I would add more replicas.
22:55 JoeJulian For instance, if you're streaming a Captain America 2 file, odds are you're going to need to feed that from a hell of a lot more than two servers.
22:55 kmai007 agreed, i thought that was why i wanted more replicas in my8 servers
22:56 bgpepi joined #gluster
22:56 kmai007 +1 great movie by the ay
22:56 kmai007 way*
22:56 JoeJulian If, however, you're maxing out your servers serving multiple files, distribution would better serve you.
22:57 kmai007 example?
22:57 kmai007 i doubt i'm maxing out anything
22:58 JoeJulian Pandora streams millions of songs simultaneously. They don't have to worry about replica count because they don't give you the song you're interested in. Instead they give you something else similar. They could be simultaneously serving a million individual songs which could then be distributed over thousands of servers using dht.
22:59 kmai007 hilarious
22:59 JoeJulian brilliant data distribution model imho.
23:00 kmai007 thanks JoeJulian, goodnight glusterbot
23:00 JoeJulian See you later kmai007
23:09 yinyin joined #gluster
23:33 Ark joined #gluster
