IRC log for #gluster, 2014-04-24

All times shown according to UTC.

Time Nick Message
00:00 badone_ joined #gluster
00:05 theron joined #gluster
00:36 yinyin_ joined #gluster
00:37 vpshastry joined #gluster
00:45 gdubreui joined #gluster
00:56 vpshastry joined #gluster
01:00 chirino joined #gluster
01:04 dbruhn joined #gluster
01:08 jmarley joined #gluster
01:08 jmarley joined #gluster
01:30 Honghui joined #gluster
01:45 DV joined #gluster
01:46 Honghui joined #gluster
02:13 haomaiwa_ joined #gluster
02:19 DV joined #gluster
02:38 DV joined #gluster
02:50 DV joined #gluster
02:54 Honghui joined #gluster
02:56 bharata-rao joined #gluster
02:59 bala joined #gluster
02:59 yinyin- joined #gluster
02:59 RameshN joined #gluster
03:39 Franklu joined #gluster
03:48 shubhendu joined #gluster
03:55 haomaiw__ joined #gluster
04:07 raghug joined #gluster
04:21 vpshastry joined #gluster
04:25 haomaiwa_ joined #gluster
04:30 ppai joined #gluster
04:32 sputnik13 joined #gluster
04:42 DV joined #gluster
04:46 sputnik13 joined #gluster
04:49 joshin left #gluster
04:51 pvh_sa joined #gluster
04:53 kanagaraj joined #gluster
04:54 haomai___ joined #gluster
04:54 saurabh joined #gluster
04:58 atinmu joined #gluster
04:58 sputnik13 joined #gluster
05:02 DV joined #gluster
05:08 sputnik13 joined #gluster
05:09 davinder joined #gluster
05:10 Franklu joined #gluster
05:10 bala joined #gluster
05:11 ravindran1 joined #gluster
05:12 raghug joined #gluster
05:13 prasanth_ joined #gluster
05:14 Philambdo joined #gluster
05:15 pvh_sa joined #gluster
05:17 dusmant joined #gluster
05:20 kdhananjay joined #gluster
05:20 nishanth joined #gluster
05:20 nthomas joined #gluster
05:21 y4m4 raghug: ping
05:21 glusterbot y4m4: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
05:21 y4m4 glusterbot: shut up!
05:21 y4m4 raghug: about the patch - i don't see the same regression on my laptop, some kind of a transient bug - i bet if you run the regression tests again they wouldn't show up
05:25 y4m4 raghug: but surprisingly the change you requested doesn't work - maybe the implementation on my end was wrong. I might not have much bandwidth to test the whole thing this week
05:25 DV joined #gluster
05:25 Honghui joined #gluster
05:25 y4m4 raghug: so perhaps you can cherry-pick the change.
05:28 sputnik13 joined #gluster
05:34 zerick joined #gluster
05:35 lalatenduM joined #gluster
05:39 ricky-ticky joined #gluster
05:42 bala joined #gluster
05:42 sputnik13 joined #gluster
05:45 kanagaraj joined #gluster
05:47 kumar joined #gluster
05:50 itisravi joined #gluster
05:54 rahulcs joined #gluster
05:55 rastar joined #gluster
05:59 glusterbot New news from newglusterbugs: [Bug 1090757] Puppet-Gluster should run fstrim when doing thin-p <https://bugzilla.redhat.com/show_bug.cgi?id=1090757>
06:05 ravindran1 joined #gluster
06:06 meghanam joined #gluster
06:06 meghanam_ joined #gluster
06:08 sputnik13 joined #gluster
06:09 sputnik13 joined #gluster
06:09 raghug joined #gluster
06:10 haomaiwang joined #gluster
06:14 sputnik13 joined #gluster
06:17 raghu joined #gluster
06:17 DV joined #gluster
06:19 nshaikh joined #gluster
06:29 Philambdo joined #gluster
06:34 ktosiek joined #gluster
06:38 bala joined #gluster
06:41 kasturi joined #gluster
06:45 ron-slc joined #gluster
06:50 edward2 joined #gluster
06:50 ktosiek hmm, is there some "live upgrade" path for 3.2 -> 3.4? If I upgrade the client first, and then only stop the server for a short time, will it all work?
06:51 samppah 3.2 and 3.4 aren't compatible with each other, so no..
06:52 samppah not sure if 3.2 -> 3.3 -> 3.4 is any better
06:53 ktosiek good to know, sucks that Ubuntu saucy was still 3.2 :-/
06:53 ekuric joined #gluster
06:54 ktosiek ok, that means I have to upgrade *before* putting these boxes in production. Is the semiosis PPA reliable?
06:54 samppah afaik it is reliable
06:55 samppah semiosis is good guy :)
06:59 ktosiek cool ^_^
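For anyone following along, installing from that PPA on Ubuntu looked roughly like this; the PPA name below is an assumption (semiosis published per-release PPAs) and is not stated in the log, so check the Gluster download page for the current one:

    sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4   # assumed PPA name, verify before use
    sudo apt-get update
    sudo apt-get install glusterfs-server glusterfs-client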
07:02 deepakcs joined #gluster
07:03 VerboEse joined #gluster
07:04 dusmant joined #gluster
07:10 eseyman joined #gluster
07:17 kanagaraj_ joined #gluster
07:17 rjoseph joined #gluster
07:17 ctria joined #gluster
07:23 bala joined #gluster
07:25 edward2 joined #gluster
07:25 DV joined #gluster
07:28 sputnik13 joined #gluster
07:28 overclk joined #gluster
07:32 misuzu joined #gluster
07:37 rbw joined #gluster
07:41 psharma joined #gluster
07:42 fsimonce joined #gluster
07:49 dusmant joined #gluster
07:50 Ark joined #gluster
07:51 rahulcs joined #gluster
07:51 bala joined #gluster
07:52 raghug joined #gluster
07:52 harish_ joined #gluster
07:54 sputnik13 joined #gluster
07:55 ngoswami joined #gluster
08:02 bharata-rao joined #gluster
08:06 rahulcs joined #gluster
08:12 rbw joined #gluster
08:12 andreask joined #gluster
08:18 sputnik13 joined #gluster
08:22 giannello joined #gluster
08:26 bala1 joined #gluster
08:30 glusterbot New news from newglusterbugs: [Bug 1090807] Remove autogenerated xdr routines and coroutines <https://bugzilla.redhat.com/show_bug.cgi?id=1090807>
08:41 prasanth_ joined #gluster
08:42 Slashman joined #gluster
08:45 Durzo joined #gluster
08:50 Philambdo joined #gluster
09:00 vimal joined #gluster
09:02 haomaiw__ joined #gluster
09:11 Honghui joined #gluster
09:12 Paul-C joined #gluster
09:13 saravanakumar joined #gluster
09:22 bala1 joined #gluster
09:24 rahulcs joined #gluster
09:27 vpshastry1 joined #gluster
09:35 ravindran1 joined #gluster
09:44 jmarley joined #gluster
09:44 jmarley joined #gluster
09:52 vpshastry1 joined #gluster
09:58 edward2 joined #gluster
10:00 prasanthp joined #gluster
10:01 dusmant joined #gluster
10:03 Honghui joined #gluster
10:04 GabrieleV joined #gluster
10:14 bala1 joined #gluster
10:15 ravindran1 joined #gluster
10:28 rastar joined #gluster
10:32 Ark joined #gluster
10:33 rahulcs joined #gluster
10:35 zorgan joined #gluster
10:37 crashmag joined #gluster
10:38 kasturi joined #gluster
10:39 qdk_ joined #gluster
10:41 kasturi joined #gluster
10:46 ira_ joined #gluster
10:50 shubhendu joined #gluster
10:51 rahulcs joined #gluster
10:54 pk joined #gluster
10:58 Humble joined #gluster
11:03 bala1 joined #gluster
11:04 aravindavk joined #gluster
11:06 Jakey joined #gluster
11:07 rbw joined #gluster
11:07 bfoster joined #gluster
11:24 kkeithley1 joined #gluster
11:27 rahulcs joined #gluster
11:29 ngoswami joined #gluster
11:33 Andy5__ joined #gluster
11:34 fsimonce` joined #gluster
11:35 kanagaraj__ joined #gluster
11:42 shubhendu joined #gluster
11:43 rahulcs joined #gluster
11:44 pdrakeweb joined #gluster
11:45 kanagaraj joined #gluster
11:47 jmarley joined #gluster
11:47 jmarley joined #gluster
11:49 kanagaraj_ joined #gluster
11:49 kdhananjay joined #gluster
11:54 tdasilva joined #gluster
11:54 bala1 joined #gluster
11:56 dusmant joined #gluster
12:03 rahulcs joined #gluster
12:05 RameshN joined #gluster
12:06 kanagaraj joined #gluster
12:10 davinder joined #gluster
12:13 portante kkeithley: good morning
12:13 kdhananjay joined #gluster
12:13 davent joined #gluster
12:14 davent Is anyone available to help me with a SSL related issue?
12:17 Jakey left #gluster
12:21 Philambdo joined #gluster
12:21 rwheeler joined #gluster
12:22 itisravi joined #gluster
12:24 dusmant joined #gluster
12:27 ppai joined #gluster
12:32 elico joined #gluster
12:34 Ark joined #gluster
12:34 pk left #gluster
12:35 bennyturns joined #gluster
12:45 Andy5__ @glusterbot: commands
12:45 Andy5__ glusterbot: help
12:45 glusterbot Andy5__: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin. You may also want to use the 'list' command to list all available plugins and commands.
12:45 Andy5__ glusterbot: list
12:45 glusterbot Andy5__: Admin, Alias, Anonymous, Bugzilla, Channel, ChannelStats, Conditional, Config, Dict, Factoids, Google, Herald, Later, MessageParser, Misc, Network, NickCapture, Note, Owner, Plugin, RSS, Reply, Seen, Services, String, Topic, URL, User, Utilities, and Web
12:46 kkeithley1 portante: ping
12:46 glusterbot kkeithley1: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
12:47 diegows joined #gluster
12:47 portante glusterbot: ping
12:47 glusterbot pong
12:47 rahulcs joined #gluster
12:47 portante kkeithley_:  here
12:48 kanagaraj_ joined #gluster
12:48 kkeithley_ I have a couple python-ish questions, got a minute? Or ten?
12:48 portante y
12:49 portante what's up?
12:49 kkeithley_ First one is running python setup.py during build
12:49 kkeithley_ one sec
12:50 kkeithley_ make that `python setup.py install` on a sles11 box there's a config file that's making it install in /usr/local
12:51 kkeithley_ let me see if I can find it
12:51 kanagaraj__ joined #gluster
12:51 kkeithley_ it's got prefix='/usr/local'
12:52 portante hmm
12:52 portante really
12:52 portante I wonder why, what is the package, is this gluster, I assume?
12:53 kkeithley_ yeah, gluster geo-rep and glupy xlator
12:55 kkeithley_ I was trying to override that by adding a param "prefix=/usr" to our two setup.py files. It didn't work. Was wondering if you had any thoughts? Otherwise I just have to move the config file out of the way during builds
12:57 kkeithley_ Naturally now I can't find the file, unless maybe it was on the opensuse box instead. Let me check
12:59 portante something sounds not right there
12:59 kkeithley_ yeah. Well, it's suse/sles for one. ;-)
13:00 portante usually, a source tree does not, or probably should not, have a special setting for the install directory prefix
13:00 portante :)
13:02 B21956 joined #gluster
13:02 kkeithley_ source tree? It's actually a problem building the rpms. I was trying to patch the two setup.py files at rpm build time to override the prefix and not actually modify our setup.py(.in) files in the source tree
13:02 kkeithley_ and avoid the problem I'm having now of trying to remember where this config file is to move out of the way.
13:03 kanagaraj joined #gluster
13:04 kkeithley_ found it. The config file in this case is /usr/lib64/python2.6/distutils/distutils.cfg, which has a line "prefix=/usr/local"
13:05 japuzzo joined #gluster
13:05 kkeithley_ I think I want our python files in /usr/lib64/python2.6/site-packages on sles, just like they are in Fedora, RHEL, CentOS, etc.
13:06 portante is there a /usr/lib/python2.6 on SLES?
13:06 kkeithley_ nope
13:06 portante and I thought you can override that prefix setting with a setup.cfg file of your own
13:07 kkeithley_ okay, I can try that too.
13:08 portante what section is that prefix= in?
13:08 kkeithley_ [install]
13:10 portante can you just add --prefix to the python setup.py install line?
13:10 kkeithley_ ah, dunno, I'll try that
13:10 Durzo joined #gluster
13:11 portante https://docs.python.org/2/install/#alternate-installation-unix-the-prefix-scheme
13:11 glusterbot Title: Installing Python Modules Python v2.7.6 documentation (at docs.python.org)
13:11 portante good boy, glusterbot
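A minimal sketch of the two overrides being discussed, for anyone hitting the same SLES distutils.cfg behaviour; the spec-file context and paths are illustrative, not taken from the actual glusterfs packaging:

    # override the distro-wide prefix on the command line, e.g. in an rpm %install step
    python setup.py install --prefix=/usr --root="$RPM_BUILD_ROOT"

    # or ship a setup.cfg next to setup.py that overrides the [install] section
    [install]
    prefix = /usr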
13:11 rahulcs joined #gluster
13:11 kkeithley_ your google fu is better than mine was the other day.
13:12 portante :)
13:15 kkeithley_ and the other question relates to our new geo-rep. our gsyncd.py has a line with --ssh-command-tar, and we have code in glusterd to exec ssh-command-tar, making me think this might be a python thing. Some people installing 3.5.0 are seeing an error "failure: not a valid option: ssh-command-tar" when they install the geo-rep rpm.  I'm having no luck finding anything in the python packages, or any packages.
13:21 deepakcs joined #gluster
13:21 chirino joined #gluster
13:22 kkeithley_ my google fu is failing me yet again.
13:22 plarsen joined #gluster
13:23 ndevos kkeithley_: I do not think glusterd exec's ssh-command-tar, it seems to add ssh-command-tar as an argument, I assume it adds --ssh-command-tar with that call
13:24 kkeithley_ sorry, yes, that's right
13:33 kkeithley_ so not a python thing
13:35 kkeithley_ cool, two problems sorted
13:35 kkeithley_ except I missed a phone call
13:36 theron joined #gluster
13:38 Steved joined #gluster
13:39 Steved Anyone know how to manually remove a geo-replication agreement?
13:39 Steved And preferably keep gluster volumes online
13:41 kanagaraj_ joined #gluster
13:41 nullck joined #gluster
13:41 nullck_ joined #gluster
13:45 ctria joined #gluster
13:46 kanagaraj__ joined #gluster
13:46 von joined #gluster
13:50 von hello, having some issues figuring out if gluster is suitable for my purposes. I run two servers that store data in a specific directory /dirname (not as a gluster mount). I want to mount those two dirs (host1:/dirname and host2:/dirname) on another host remotely to read only, while writing data locally into dirs. The thing is, data on one of the bricks is completely ignored unless the directory structure is
13:50 von exactly the same.
13:53 von in other words, is there any way to make writing to bricks locally work or do I absolutely have to write to the gluster mount?
13:53 Steved write to the gluster mount
13:53 von can I prioritize writing to the specific brick on the specific server in the distribute mode?
13:54 Steved all docs point to never manually modifying the underlying file system
13:54 von that makes gluster pointless in my use case :( thanks
13:54 Steved re: prioritizing, I've never seen that option - maybe tell us what the use case is?
13:55 ndevos von: there is a NUFA translator that will write to the brick on localhost
13:56 ndevos von: you will still need to write through a glusterfs-mountpoint, but at least it should not send the data over the network to the other brick
13:56 von ndevos, thanks
13:57 sroy_ joined #gluster
13:57 Steved ndevos do you know if theres a method of removing a bad geo-rep agreement while keeping volumes online?
13:58 foster joined #gluster
13:58 ndevos von: lacking browsable docs on the gluster.org website, you might find this interesting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html-single/Administration_Guide/index.html#sect-User_Guide-Managing_Volumes-NUFA
13:59 glusterbot Title: Administration Guide (at access.redhat.com)
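A rough sketch of what enabling NUFA looks like per the guide linked above; the volume and brick names are hypothetical, and (as comes up later in this discussion) the cluster.nufa volume option only landed in newer releases, so check it against your version:

    gluster volume create dirvol host1:/bricks/b1 host2:/bricks/b1   # plain distribute volume
    gluster volume set dirvol cluster.nufa on                        # prefer the local brick for new files
    mount -t glusterfs localhost:/dirvol /mnt/dirvol                 # writes still go through the mount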
14:00 ndevos Steved: uh, no, I dont really know about that :-/
14:01 Steved ndevos: any idea where I should ask this question? The mailing list rarely gets responses
14:03 ndevos Steved: if your post to the mailing list includes clear details, error messages and all, I would expect someone to respond there
14:03 ndevos Steved: this channel works too, but the number of people that use geo-rep is relatively small, I think
14:03 Steved ndevos: hmm, what is the typical backup strategy people use?
14:04 ndevos well, maybe not the number of users, but there are few people that know how to troubleshoot geo-rep
14:04 ndevos Steved: geo-rep is indeed one of the preferred solutions for backups
14:04 von Steved, the use case is, data is stored on two separate servers, but needs to be accessed from one place. Otherwise it involves some really weird http proxying based on requests.
14:05 ndevos von: nufa and a plain dht volume sounds like a good match for that
14:05 zaitcev joined #gluster
14:05 von ndevos, thanks, I'll look into that
14:06 Steved ndevos: I'll give the mailing list a shot, appreciate it
14:07 ndevos Steved: some of the geo-rep devs tend to respond to the mailinglist, but are not always on irc, so I think the list has a higher chance on success
14:10 kanagaraj joined #gluster
14:10 John_HPC kkeithley: I erased/installed both glusterfs-server and glusterfs-geo. I was unable to get that failure with ssh-command-tar to appear
14:12 lmickh joined #gluster
14:13 ctria joined #gluster
14:14 jobewan joined #gluster
14:15 von ndevos, does glusterfs 3.2.7 support NUFA?
14:17 ndevos von: I dont think so, and 3.2.7 is not updated/bugfixed anymore, you really want to move to 3.4 or 3.5
14:17 von yeah, I guess I have no choice in that case :)
14:19 wushudoin joined #gluster
14:20 LoudNoises joined #gluster
14:22 theron joined #gluster
14:23 jskinner joined #gluster
14:26 plarsen joined #gluster
14:31 von ndevos, do I have to have a local brick on EVERY host I mount the volume on if I enable NUFA?
14:32 gmcwhistler joined #gluster
14:33 ndevos von: I think you need a local brick for every storage server that creates files on the volume
14:33 ndevos von: I have not tested it myself yet, and others that definitely know about it dont seem to be online atm
14:35 von attempts at mounting fail with “[nufa.c:641:init] 0-testvol-dht: Could not find specified or local subvol” error
14:47 sticky_afk joined #gluster
14:48 stickyboy joined #gluster
14:49 von http://review.gluster.org/#/c/5414/
14:49 glusterbot Title: Gerrit Code Review (at review.gluster.org)
14:49 von apparently it has been merged into 3.5
14:50 von current stable requires the local brick
14:52 gdavis33 left #gluster
14:55 RameshN joined #gluster
14:58 RameshN joined #gluster
15:01 glusterbot New news from newglusterbugs: [Bug 1086783] Add documentation for the Feature: qemu 1.3 - libgfapi integration <https://bugzilla.redhat.com/show_bug.cgi?id=1086783>
15:05 rahulcs joined #gluster
15:05 ndevos von: it should be in 3.5, and maybe in 3.4 too... maybe nufa should be able to mount from anywhere when read-only?
15:06 von ndevos, nah, attempts at mounting it read only result in the same error
15:06 ndevos von: right, but that could be considered a bug
15:07 von doesn't really matter all that much, since NUFA prioritizes local storage and I don't write anything from the web-server host, this shouldn't be a problem
15:07 ekuric left #gluster
15:07 ndevos it might be intended behaviour, or just a use-case the developers did not think about
15:07 von it's in the list http://www.gluster.org/community/documentation/index.php/Backport_Wishlist
15:07 glusterbot Title: Backport Wishlist - GlusterDocumentation (at www.gluster.org)
15:08 von so maybe it will be backported at some point
15:08 ctria joined #gluster
15:08 theron joined #gluster
15:10 ndevos von: okay, and you're on 3.4.x now? not moving to 3.5 yet?
15:11 von ndevos, not sure yet, I'll be releasing servers in production, having doubts switching to the beta branch
15:12 kanagaraj joined #gluster
15:12 von might give it a try though
15:12 shubhendu joined #gluster
15:15 sauce_ very general glusterfs question: does the following require the creation of separate glusterfs volumes?  i want to set up just 2 glusterfs servers for my network, and share them among many customers. i do not want the customers to see each other's data.
15:18 ndevos von: 3.5 has been release about 2 weeks ago, it is out of beta
15:18 von err
15:18 von thanks
15:18 von I'll look into that xD
15:19 von it still says beta on the downloads page, but yeah, seems like 3.5 is actually released
15:20 kanagaraj_ joined #gluster
15:23 ndevos JoeJulian: I think you fixed the homepage? maybe you can update http://www.gluster.org/download/ too?
15:23 jag3773 joined #gluster
15:23 kanagaraj_ joined #gluster
15:26 kanagaraj__ joined #gluster
15:33 dbruhn joined #gluster
15:40 deeville joined #gluster
15:50 foster joined #gluster
15:53 kaptk2 joined #gluster
15:54 cdez joined #gluster
15:56 theron_ joined #gluster
15:59 sputnik13 joined #gluster
16:00 Philambdo joined #gluster
16:00 hagarth joined #gluster
16:02 Ylann joined #gluster
16:02 Ylann joined #gluster
16:03 Ylann joined #gluster
16:07 John_HPC sauce_: Your requirements will determine how you set it up. You could set up group permissions and have different folders set up:  /mnt/gluster/customer1  /mnt/gluster/customer2
16:08 John_HPC their data will reside on the same physical systems, but managed by simple linux group permissions
16:08 John_HPC now if there are stricter requirements for separation of data, you may need to set up different servers.
16:09 _dist joined #gluster
16:10 nueces joined #gluster
16:10 tru_tru joined #gluster
16:12 Mo__ joined #gluster
16:18 ctria joined #gluster
16:19 vpshastry joined #gluster
16:22 vpshastry left #gluster
16:24 dbruhn sorry missed the rest of the question what was he asking?
16:24 pk joined #gluster
16:25 hagarth joined #gluster
16:26 John_HPC dbruhn: <sauce_> very general glusterfs question: does the following require the creation of separate glusterfs volumes?  i want to set up just 2 glusterfs servers for my network, and share them among many customers. i do not want the customers to see each other's data.
16:27 dbruhn If he has the luxury of setting up different volumes that would actually probably be ideal, IMHO. He could do it on the same storage devices. But John_HPC is right too, you can do it through linux permissions as well.
16:28 John_HPC a lot of it depends on cost, time to set up, and operational security requirements
16:28 pk do we have paul who raised the bug 1089758 here?
16:28 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1089758 high, unspecified, ---, pkarampu, NEW , KVM+Qemu + libgfapi: problem dealing with failover of replica bricks causing disk corruption and vm failure.
16:28 John_HPC especially if the data could be considered ITAR/EAR99, Proprietary, or any type of classified/confidential
16:31 JoseBravo joined #gluster
16:31 John_HPC ITAR/EAR99 = Export Controlled, cryptographic, weapons systems data, etc...
16:32 JoseBravo I'm trying to configure automatic mounting using the gluster client but with a failover option, so that in case one gluster server goes down the mount point continues working. In http://gluster.org/community/documentation/index.php/Gluster_3.1:_Automatically_Mounting_Volumes I can't find anything about it
16:32 glusterbot Title: Gluster 3.1: Automatically Mounting Volumes - GlusterDocumentation (at gluster.org)
16:32 dbruhn JoseBravo, are you using the gluster fuse client?
16:32 dbruhn or NFS
16:33 JoseBravo dbruhn, gluster fuse client
16:33 dbruhn Does it automatically
16:33 dbruhn the client connects to all of the servers for the volume
16:33 dbruhn the initial connection is to establish the mount, and notify the client of all the servers to attach to
16:33 JoseBravo Only if I specify one ip address?
16:34 dbruhn The client connects to the first IP, a manifest of all the brick servers is returned, then the client connects to all of the brick servers directly
16:34 foster joined #gluster
16:36 JoseBravo dbruhn, nice... but what happen if it try to mount at the time the service in the fstab is down?
16:36 dbruhn well then you've got an issue, how often are you planning on having nodes down in your storage?
16:37 dbruhn s/nodes/servers/
16:37 glusterbot What dbruhn meant to say was: well then you've got an issue, how often are you planning on having servers down in your storage?
16:37 jbd1 JoseBravo: you can specify backup-volfile-server=<ip> in your fstab and the fuse mount will connect to that if the primary is down, I believe
16:38 jbd1 s/backup-volfile/backupvolfile/
16:38 glusterbot What jbd1 meant to say was: JoseBravo: you can specify backupvolfile-server=<ip> in your fstab and the fuse mount will connect to that if the primary is down, I believe
16:38 JoseBravo Ok, perfect
16:39 jbd1 I use a hostname there, but same difference
16:39 dbruhn jdb1, have you used that option before?
16:43 Gilbs1 joined #gluster
16:44 jbd1 dbruhn: I haven't tested it, but it's in the docs
16:44 jbd1 (and I have it in my fstab)
16:44 jbd1 oh, I actually _have_ tested it, but only in the lab
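An fstab entry using that option would look something like the following; the hostnames and volume name are placeholders, and the spelling backupvolfile-server is the 3.4-era one (later releases renamed it backup-volfile-servers):

    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0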
16:47 pk left #gluster
16:53 JoseBravo I just installed a GlusterFS 4TB volume replicated on two servers; each server has 36GB ram, dual six core processors, RAID 10 SATA III, and everything is on 1Gbit (both GlusterFS servers are using a LACP 2x2Gbit bonding interface). I ran a test with iozone and this was my result: http://pastebin.com/3XFKkkv5
16:53 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:53 JoseBravo What do you think about this result?
16:55 Gilbs1 Anyone else having issues with security scans and glusterd/fs eating up all the memory?  Yesterday I told oom-killer not to kill gluster, but halfway through the security scan glusterd/fs eats up all the memory until the box reboots itself.  If I stop glusterd and the mount and then scan, the box is fine.    (just upgraded) 3.5.0 Centos 6.5 16G RAM
16:58 pk joined #gluster
17:00 rahulcs joined #gluster
17:06 samppah JoseBravo: did you run that test from separate client or from servers acting as client?
17:07 pk left #gluster
17:10 JoeJulian ndevos: done
17:11 ndevos thanks JoeJulian!
17:14 JoseBravo samppah from separate client
17:14 JoseBravo dd is giving me: 1048576000 bytes (1.0 GB) copied, 17.8786 s, 58.6 MB/s after some performance changes
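Those numbers match a 1000 x 1 MiB write; a test of that shape against the fuse mount would be something like the line below (the path is a placeholder and the exact flags JoseBravo used are not shown in the log):

    dd if=/dev/zero of=/mnt/myvol/ddtest bs=1M count=1000 conv=fdatasync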
17:16 JoseBravo How does it really work? Does it write only to one of the gluster servers, which then syncs to the other gluster server?
17:17 JoeJulian The client connects to all the servers and writes to each replica directly.
17:20 Andy5__ joined #gluster
17:23 diegows joined #gluster
17:29 rahulcs joined #gluster
17:30 jbrooks Hey guys, if I have gluster running on three machines, but bricks on only two of the machines, for a replica 2 setup, can the third (w/o bricks) still help provide quorum?
17:32 Matthaeus joined #gluster
17:32 glusterbot New news from newglusterbugs: [Bug 1088589] Failure in gf_log_init reopening stderr <https://bugzilla.redhat.com/show_bug.cgi?id=1088589>
17:35 rahulcs joined #gluster
17:36 samppah jbrooks: yes it can if it's part of the "trusted pool"
17:37 samppah server quorum, that is, if I remember correctly
17:38 jbrooks samppah: thanks -- between replica/brick multiple requirements and machine requirements for quorum... I keep getting confused :)
17:38 samppah jbrooks: i know the feeling :)
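For reference, server-side quorum is driven by volume options along these lines (volume name is a placeholder); with three peers in the trusted pool, a brick-less third server still counts toward the server quorum:

    gluster peer probe server3                                    # third peer, no bricks required
    gluster volume set myvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%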
17:40 y4m4 ndevos: ping
17:40 glusterbot y4m4: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
17:41 y4m4 ndevos: are you around?
17:41 jbrooks :)
17:41 y4m4 glusterbot: smart buoY!
17:43 scuttle_ joined #gluster
17:47 aurigus joined #gluster
17:51 Matthaeus joined #gluster
17:53 nullck_ joined #gluster
17:53 nullck_ joined #gluster
18:07 ndk joined #gluster
18:07 davinder joined #gluster
18:35 misuzu joined #gluster
18:37 overclk joined #gluster
18:38 rahulcs joined #gluster
18:39 Pavid7 joined #gluster
18:58 edward1 joined #gluster
19:00 John_HPC Are there any recommendations for performance.write-behind-window-size or performance.cache-size?
19:00 JoeJulian As much as your server allows.
19:03 JoeJulian At 60 bricks per server with only 16gb I set cache-size to 8MB to use up 10Gb leaving the rest for filesystem cache.
19:06 John_HPC I have 18x2=36 bricks. 6 physical systems at 12GB of memory. Connected with 10GB backend network.
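Both settings are ordinary per-volume options, set along these lines; the volume name and values are illustrative, not a recommendation for the hardware described above:

    gluster volume set myvol performance.cache-size 256MB
    gluster volume set myvol performance.write-behind-window-size 4MB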
19:08 VeggieMeat_ joined #gluster
19:09 basso joined #gluster
19:13 tjikkun joined #gluster
19:13 tjikkun joined #gluster
19:13 sauce_ John_HPC dbruhn thanks for answering my question.  separate volumes would be a luxury at this time because creating them is a manual process. i have some idea on automating it, but that will come up later.  *nix permissions are also a good option.  setting directories to 770 for example.  they will still be able to list the root of the shared glusterfs, but they won't be able to enter any dirs
19:15 John_HPC no problem. Depending on anonymity, just assign them folders like "customer#" or "project#"
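Run from any client mount, the per-customer separation described above is plain Unix ownership and modes; the paths and group name are hypothetical:

    mkdir /mnt/gluster/customer1
    chgrp customer1 /mnt/gluster/customer1
    chmod 770 /mnt/gluster/customer1      # members of customer1 can enter, other customers cannot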
19:15 tjikkun_work joined #gluster
19:15 monotek joined #gluster
19:22 dbruhn sauce_, don't you have to manually assign permissions as well? You could easily script the action to make it bulletproof, and purpleidea has his puppet modules if you use that.
19:23 diegows joined #gluster
19:23 sauce_ yes i am using puppet too!! i love puppet.  unfortunately purpleidea's is only for EL-based systems. I am using ubuntu.  also he warns that it's beta status
19:23 Matthaeus joined #gluster
19:23 jhp joined #gluster
19:23 sauce_ permissions are set via any client, whereas gluster volume creation is only done on the server
19:24 sauce_ so permissions are more automateable for me, at this time
19:24 semiosis glusterbot: ping
19:24 glusterbot pong
19:24 dbruhn fair
19:24 sauce_ when i said "i have some idea on automating it, but that will come up later.".... my idea was adding to purpleidea's project to make it ubuntu compatible
19:25 dbruhn There is a really excellent way to help purpleidea get his puppet module out of beta and into production on ubuntu.... ;)
19:25 sauce_ i would do my best to beef up the EL side of it too
19:25 sauce_ AWS, gluster, puppet, will be joined together in matrimony
19:26 jhp Good evening everyone. I would like to discuss a use case and I would like to know if it is possible in a normal manner. We are currently building a 3 DC setup and we need a spread storage pool, in such a setup that we can lose one DC and still be fully operational. Every DC has for the moment 4 nodes with storage bricks.
19:26 sauce_ jhp what type of pipe do you have inbetween DCs
19:26 jhp 10Gbit interconnects.
19:27 semiosis how much latency?
19:27 dbruhn also is it a requirement that the data is synchronously replicated or is async ok
19:27 jhp We are aiming at SPB on Avaya. max 2ms.
19:27 jhp We need to know that when a node writes the data that all clients are sure to have this data.
19:28 kmai007 joined #gluster
19:28 semiosis you mean when a client writes the data, all servers are sure to have the data.  see ,,(glossary)
19:28 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
19:28 jhp When data is written in DC 1, and someone in DC 2 needs it a few seconds later it should be available.
19:29 dbruhn what are the characteristics of the data?
19:29 semiosis and whats the application?
19:29 kmai007 anybody have a chance, please help me interpret what gluster is doing? [2014-04-23 20:54:04.175993] D [client.c:2050:client_rpc_notify] 0-dyn_cfu-client-2: got RPC_CLNT_CONNECT
19:29 kmai007 [2014-04-23 20:54:04.176166] D [client-handshake.c:185:client_start_ping] 0-dyn_cfu-client-2: returning as transport is already disconnected OR there are no frames (1 || 1)
19:29 kmai007 [2014-04-23 20:54:04.176216] D [client.c:2050:client_rpc_notify] 0-dyn_cfu-client-3: got RPC_CLNT_CONNECT
19:29 kmai007 [2014-04-23 20:54:04.176301] D [client-handshake.c:185:client_start_ping] 0-dyn_cfu-client-3: returning as transport is already disconnected OR there are no frames (1 || 1)
19:29 kmai007 [2014-04-23 20:54:04.176388] D [client-handshake.c:1693:server_has_portmap] 0-dyn_cfu-client-0: detected portmapper on server
19:30 kmai007 joined #gluster
19:30 kmai007 i'm really sorry
19:30 Ark joined #gluster
19:30 kmai007 flipping copy/paste wrong thing
19:30 jhp We have an application where data is generated in a cloud like platform. This cloud is spread in the DC and the customers can create documents and do queries against the application which can result in data or reports that are generated. All sessions are stateless.
19:30 kmai007 so sorry
19:30 semiosis kmai007: you should know better than to multiline paste in channel, you've been around a while :)
19:30 semiosis use pastie.org or similar
19:30 kmai007 buffer mistake
19:31 kmai007 is this safe? http://fpaste.org/96726/36764313/
19:31 glusterbot Title: #96726 Fedora Project Pastebin (at fpaste.org)
19:31 kmai007 ok thats what i wanted
19:31 jhp So when I do a call to the platform to create some report, and my next call enters the system on a different DC, the data should be available.
19:32 kmai007 could possible be business as usual since i have DEBUG logging set
19:32 jhp We don't know on what server the next call will enter the cloud.
19:32 dbruhn jhp, you'll probably need to determine what is acceptable performance. It sounds like you need to use synchronous replication, which can impact performance
19:32 semiosis your latency may be low enough
19:32 dbruhn agreed
19:32 dbruhn I would say set up a test and see if the performance is acceptable
19:32 semiosis kmai007: does it work?
19:33 kmai007 yep its functioning, i'm trying to "recreate" and break it
19:33 kmai007 in my testing
19:33 kmai007 so i tried to move to production 2 weeks ago
19:33 kmai007 and shit hit the fan, now i'm trying to debug and recreate in my R&D
19:33 jhp The problem I'm wondering about is how to set up a policy that makes sure that data is written to at least 2 DCs.
19:33 kmai007 only problem is, its not BREAKING
19:34 semiosis jhp: when you create a replicated volume data is written to all replicas in sync
19:34 dbruhn semiosis is corret
19:34 semiosis also correct
19:34 jhp semiosis: I know, but how many replica's do I need then?
19:34 dbruhn three if you want the data in each data center at all times
19:35 semiosis jhp: sounds like you'll want three, one in each dataceter
19:35 jhp When I say "replicate 2" those 2 could be in 1 DC.
19:35 semiosis well dont do that
19:35 dbruhn jhp, you build the volume's replication scheme when creating it, so it's more deliberate than that.
19:35 jhp How do I force the Volume to spread in multiple DC's. Can I tell Gluster what bricks are in what DC?
19:35 semiosis you know where your servers are
19:36 semiosis put server1 in dc 1, server2 in dc 2, and server3 in dc 3, then gluster volume create replica 3 server1:/brick server2:/brick server3:/brick
19:37 jhp semiosis: Yes, but I have 12 servers. 4 in DC1, 4 in DC2 and 4 in DC3. They all have 3 disks of 400G each.
19:37 jhp So I have 36 bricks.
19:37 jhp Of those bricks I have 12 in every DC.
19:38 jhp When I say "replicate 3" everything could end up in 1 DC.
19:38 jhp Or am I missing something?
19:38 semiosis youre missing things
19:38 kmai007 can someone tell me what "graph" means in the gluster logs
19:38 dbruhn yep, so when you create a replica group, server1dc1:/brick1 server1dc2:/brick1 server1dc3:/brick1 server2dc1:/brick2 server2dc2:/brick2 server2dc3:/brick2
19:39 semiosis kmai007: graph is the xlator stack
19:39 dbruhn then each brick1 would contain the same data in a replica 3
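Spelled out as a command, the rule is that each consecutive group of <replica-count> bricks forms one replica set, so interleaving the data centers keeps every set spread across all three; the hostnames and brick paths below are placeholders, and the remaining bricks continue the same pattern:

    gluster volume create bigvol replica 3 \
        dc1-srv1:/bricks/b1 dc2-srv1:/bricks/b1 dc3-srv1:/bricks/b1 \
        dc1-srv1:/bricks/b2 dc2-srv1:/bricks/b2 dc3-srv1:/bricks/b2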
19:40 semiosis jhp: you should build a little gluster volume on some VMs to understand better, that's really the best way
19:40 semiosis if you run into trouble, we can help
19:40 wushudoin left #gluster
19:40 jhp semiosis: We have been doing this today and we saw things like that indeed.
19:41 jhp Was just not sure if this was by pure luck or as designed.
19:42 * semiosis afk
19:42 jhp So to understand it correctly, if I say replica 3 when creating the setup and I add all the bricks in the correct order, this should make sure that the data is spread between the 3 DC's.
19:43 dbruhn yep
19:44 jhp Sounds good. Then the next question. When I lose a server and I have to rebuild it and bring it back into the cluster, how do I tell it to take the correct location again? And what if I lose a DC for a few hours, and new data enters the cluster, how is it spread to the correct location when the 3rd DC returns?
19:45 jhp I would assume that this is possible?
19:46 dbruhn Those are two different questions.
19:46 jhp True :-)
19:46 dbruhn First yes you can replace a brick if it fails.
19:47 dbruhn I am not sure if this replace-brick command has been deprecated or not.
19:47 dbruhn checking on something quick
19:47 dbruhn http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
19:47 glusterbot Title: Gluster 3.4: Brick Restoration - Replace Crashed Server - GlusterDocumentation (at gluster.org)
19:48 dbruhn second question, the self heal daemon will repopulate the brick in the case of files being missing, or if you really want to make sure they are there faster you can trigger heal processes on the files by stat'ing them from the client side
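If the replacement box comes back with a new hostname or brick path instead of being restored in place as the linked document describes, the 3.4-era commands look roughly like this (names are placeholders):

    gluster volume replace-brick myvol dead-server:/bricks/b1 new-server:/bricks/b1 commit force
    gluster volume heal myvol full     # ask the self-heal daemon to walk the volume and repopulate the brick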
19:50 jhp And during the time that the healing process has not finished yet, can I use the servers and will they get the data from other DC's?
19:51 jhp Or should I keep them out of the cluster?
19:55 dbruhn sec
19:57 von left #gluster
20:03 glusterbot New news from newglusterbugs: [Bug 1091079] Testing_Passwordless_SSH check in gverify.sh conflicts with documentation <https://bugzilla.redhat.com/show_bug.cgi?id=1091079>
20:15 dbruhn jhp, not sure what you mean?
20:16 rahulcs joined #gluster
20:16 dbruhn when a client requests data from a gluster volume that is using replication the default is to get the data from the first computer to respond
20:20 edong23 joined #gluster
20:25 andreask joined #gluster
20:27 ron-slc joined #gluster
20:29 jhp dbruhn: Well, when the gluster volume is repairing a brick that has been away for some time, what should I do with the traffic to my cloud? Should I make sure that these bricks are not accessed?
20:29 jhp Is this possible at all?
20:29 dbruhn you don't have to worry about it
20:30 dbruhn when a client requests a file all three replicas respond to offer the file, the first to respond wins the race
20:30 dbruhn Once the files exist on the brick it will start serving them up when requested.
20:30 jhp Ah, so when the brick doesn't have the file yet it will just not offer it?
20:30 dbruhn can't offer what it doesn't have ;)
20:31 jhp True.
20:31 jhp And in a big setup like the one I describe, would you lean toward one big volume or would you try to split the data into multiple smaller volumes so you can keep the damage small when something happens?
20:32 dbruhn these are pretty relative terms, do you have a more accurate definition of big/small
20:33 rahulcs joined #gluster
20:33 jhp Well, would you go for 1 volume with 36 bricks of 400G and replica 3, or would you go for multiple volumes of 3 bricks of 100G each.
20:33 jhp If the data would give you this choice.
20:35 jhp This would ofcourse limit the storage volume to a 100G each, but this could be an option.
20:35 dbruhn I honestly don't know, as I have never run gluster across data center boundaries like you are going to. I break my volumes up based on performance requirements
20:37 dbruhn How many clients do you have at each DC?
20:38 jhp Well, the servers that host the brick are also the servers that access the data on a regular basis.
20:38 jhp They are part of the cloud.
20:39 jhp I plan to give them all access to the data using the fuse filesystem layer.
20:40 rahulcs joined #gluster
20:42 rahulcs_ joined #gluster
20:42 dbruhn Well the more client connections you throw at a distributed volume the better it will perform and scale. But mileage may vary and testing is a great thing.
20:44 abyss joined #gluster
20:49 jhp dbruhn: Ok, then we have to start testing to see what it does.
20:50 dbruhn I am assuming you have the ability to test both methods and see what works best for you.
20:50 jhp Sure
20:51 jhp We have enough hardware in stock to start testing. The only tests we did today were functional: what does it do and does it work.
20:51 jhp one last question, when does a volume become readonly?
20:52 jhp When you have replica 3 and you lose one of the replicas, the volume is not readonly.
20:52 dbruhn If you are using quorum and you lose a server, if I remember correctly.
20:52 dbruhn By default the system doesn't become read only
20:59 Paul-C left #gluster
20:59 dbruhn jhp, I would be really interested in seeing your results, as I am seeing more people talking about using replication this way. Would you be willing to put your results in a blog post?
21:00 jhp Maybe.
21:01 dbruhn Well if you do, and you find some conclusive results on what works better for you, the community would appreciate the feedback.
21:02 jhp Sure.
21:02 jhp I understand.
21:03 Andy5__ JoeJulian: perhaps we have a winner for bug https://bugzilla.redhat.com/show_bug.cgi?id=1089758
21:03 glusterbot Bug 1089758: high, unspecified, ---, pkarampu, NEW , KVM+Qemu + libgfapi: problem dealing with failover of replica bricks causing disk corruption and vm failure.
21:04 Andy5__ [2014-04-24 20:51:20.625001] D [afr-transaction.c:440:afr_transaction_rm_stale_children] 0-gtest-replicate-0: Possible split-brain for 898f359f-1501-4896-a3d7-673c438f79d1
21:04 zerick joined #gluster
21:06 Ark joined #gluster
21:06 Gilbs1 left #gluster
21:09 _dist Andy5__: I get around that by using localhost as my gluster mount point (running kvm on same machine as gluster in replicate2)
21:10 andreask1 joined #gluster
21:10 Andy5__ I do the same, but after the first mount gluster seems to use the brick names instead.
21:10 _dist but, I only 'suspect' that the issue exists, I've seen odd things happen that pointed in that direction. Never taken the time to conclusively prove it
21:11 _dist I can, and do, take down the second brick in the replicate without issue. I'm running 3.4.2 though not 3.5x
21:11 _dist (or 3.4.3 as this bug is posted to).
21:12 Andy5__ Can you take them down alternately? That is, first one, then the other one after heal? I have this problem on 3.4.2 and 3.4.3.
21:13 _dist So in my current setup right now I only have two machines, adding a third later. So I have 2 hypervisors, and a gluster replicate over 10Gbit. I can take down B with the idea that all qemu running on B will die, so I migrate first.
21:13 _dist If I take B down, I migrate all of B's stuff to A obviously. Then when I bring B back up I need to wait for it to heal before ever moving stuff over to it again. Truthfully I expect if I did migrate VM 1 from A --> B before B heals, that it should work, but I'm worried that B won't "repick" its local brick as the path when it becomes most efficient (after healing)
21:14 Andy5__ _dist: this is why it works for you. if you migrate then qemu re-establishes all proper connections and works (for me too). if you just kill the glusterfsd processes, it will not survive.
21:15 _dist Andy5__: Yes migration works, the problem seems to be (for me) that libgfapi never checks if a "better" path exists, or renegotiates when its working path fails
21:15 zerick joined #gluster
21:15 _dist But, I don't have any concrete evidence for my "feelings", which at this point is all they are
21:16 _dist iirc I did prove it to myself at one point, but it wasn't a huge enough deal for me to make a fuss about it. JoeJulian and I spoke about it briefly say 1-2 months back
21:16 Andy5__ I'm doing debug builds of qemu and built a dedicated cluster to pinpoint the problem. now I hope someone in RH actually fixes it, as gluster's internals are not my bread and butter.
21:17 Andy5__ JoeJulian mentioned about this issue, actually.
21:17 Andy5__ probably that was you :)
21:17 _dist Andy5__: I'm using a debian build of libgfapi, both my own compile of qemu and the stock debian one appeared to have the same issue.
21:18 Andy5__ However, at this point, it seems that libgfapi reestablishes the connections, but AFR kicks in on an unneeded heal and fails, and hence leaves the vm disk in "suspected" split-brain, obviously removing access to the file.
21:19 Andy5__ Actually, I suspect a race condition (perhaps with the logging process). I managed to disable logging of the bricks somehow, and the split brain did not occur.
21:20 _dist Andy5__: My real problem with vm disks is that heal info ALWAYS shows them healing, even when they aren't. It confuses me, but I remember JoeJulian convincing me it was a reporting issue with the xattr flags and not a data issue. He said that if heal info had working xml output I could use that (so I filed a bug for that one) <-- the missing xml output on heal info
21:20 Andy5__ _dist: are you also mounting the volume via fuse, in addition to using it via qemu ?
21:20 _dist Andy5__: I am, but I never use fuse for vm runtime, just for copying some files etc
21:21 foster joined #gluster
21:21 Andy5__ are your machines constantly writing to disk?  if so, I have some that will show occasionally a need for heal, but actually they're fine. There's a bug regarding this issue on bugzilla.
21:22 _dist Andy5__: They do write a fair bit, and yes that's what triggers it. It's just trying because it's a lot of work to tell if a machine has finished healing. Well, more than I'd like it to be :)
21:23 _dist I'd prefer to just have nagios parse the heal info/split-brain command every now and then, rather than do other fancy stuff. But there's always something to complain about in every solution :)
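The commands being parsed there are the standard heal status queries (volume name is a placeholder); at the time they only produced human-readable output, which is what the missing-XML-output bug mentioned above is about:

    gluster volume heal myvol info
    gluster volume heal myvol info split-brain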
21:23 Andy5__ yes, gluster is somewhat on the rough edge. but much lighter on resources than ceph, so you can actually use your servers instead of having to dedicate them to osd.
21:24 _dist haven't tried ceph yet, it's on my to-do list :)
21:25 Andy5__ it works, but you can't use the osd for anything else. loadavg up in the 30+ with 4 osd per server during healing.
21:25 Andy5__ but if you have the hardware, its rbd is much more solid than gluster's libgfapi.
21:26 _dist another day, another learning curve :)
21:26 _dist my problem is I run gluster --> zfs, ceph would functionally replace zfs so I need much time for testing
21:26 _dist all the while, production keeps going, needs to stay up etc
21:27 Andy5__ you're running gluster on top of zfs? mine too.
21:28 _dist yep, two zpools (one for each replica), raidz3 with 8 drives each.
21:28 badone_ joined #gluster
21:29 Andy5__ ok, you don't want ceph on zfs. this is what I did and I did not like it. also it likes to create partitions on each disk: one for the journal and one for data.
21:29 _dist right, if I move to ceph I do ceph only, no zfs
21:29 Andy5__ now that is stupid: the heads go crazy jumping from one edge of the platter to the other.
21:30 _dist you can use SSD for journal, it's not that much different than ZIL
21:30 _dist but talking about ceph in gluster is bad form :)
21:30 Andy5__ yep.
21:34 _dist alright, well I'm out, it's supper time here. Good chatting with you
21:36 sjoeboo joined #gluster
21:38 Andy5__ see you next time.
22:15 purpleidea sauce_: actually, my code really isn't "beta" anymore... where does it say that?
22:16 purpleidea sauce_: and I actually _am_ working on debian/ubuntu support... if you ping me by monday, I'll have something you can look at i think.
22:16 jhp Hi everyone. What does the following message in my logs on a test cluster mean:
22:16 jhp [2014-04-24 22:00:36.625073] E [index.c:267:check_delete_stale_index_file] 0-test-volume-index: Base index is not createdunder index/base_indices_holder
22:34 ira joined #gluster
22:40 jbrooks joined #gluster
22:47 nueces joined #gluster
22:55 Amanda joined #gluster
22:58 mjsmith2 joined #gluster
23:04 Psi-Jack_ joined #gluster
23:10 jbrooks joined #gluster
23:15 bennyturns joined #gluster
23:29 jbrooks joined #gluster
23:55 Philambdo joined #gluster
