
IRC log for #gluster, 2013-10-08


All times shown according to UTC.

Time Nick Message
00:16 pdrakeweb joined #gluster
00:26 dtyarnell joined #gluster
00:58 spandit joined #gluster
01:11 Oneiroi joined #gluster
01:46 _ilbot joined #gluster
01:46 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:29 davinder joined #gluster
02:40 vshankar joined #gluster
02:41 spandit joined #gluster
02:54 social joined #gluster
03:03 kshlm joined #gluster
03:15 bharata-rao joined #gluster
03:28 ppai joined #gluster
03:35 shubhendu joined #gluster
03:39 shylesh joined #gluster
03:40 raghu joined #gluster
03:46 dusmant joined #gluster
03:50 sgowda joined #gluster
03:56 anands joined #gluster
04:13 spandit joined #gluster
04:15 itisravi joined #gluster
04:21 RameshN joined #gluster
04:25 rjoseph joined #gluster
04:27 ngoswami joined #gluster
04:28 kPb_in_ joined #gluster
04:43 kPb_in joined #gluster
04:48 shubhendu joined #gluster
04:53 nshaikh joined #gluster
04:55 bala joined #gluster
05:01 vpshastry joined #gluster
05:07 nueces joined #gluster
05:15 itisravi joined #gluster
05:20 rc10 joined #gluster
05:23 bulde joined #gluster
05:28 mohankumar joined #gluster
05:38 ndarshan joined #gluster
05:39 satheesh joined #gluster
05:40 kanagaraj joined #gluster
05:45 lalatenduM joined #gluster
05:47 al joined #gluster
05:50 shubhendu joined #gluster
05:50 bulde joined #gluster
06:01 rastar joined #gluster
06:03 vimal joined #gluster
06:10 ababu joined #gluster
06:12 vimal joined #gluster
06:12 psharma joined #gluster
06:15 sgowda joined #gluster
06:20 harish_ joined #gluster
06:21 45PAAA0XM joined #gluster
06:29 jtux joined #gluster
06:37 vimal joined #gluster
06:38 kPb_in joined #gluster
06:47 pkoro joined #gluster
06:49 shruti joined #gluster
06:54 ricky-ticky joined #gluster
06:59 sgowda joined #gluster
07:00 bulde joined #gluster
07:03 davinder joined #gluster
07:04 ctria joined #gluster
07:04 ninkotech joined #gluster
07:04 fidevo joined #gluster
07:05 eseyman joined #gluster
07:06 ninkotech_ joined #gluster
07:12 shane_ joined #gluster
07:12 compbio_ joined #gluster
07:13 JonathanS joined #gluster
07:13 twx_ joined #gluster
07:13 RichiH_ joined #gluster
07:14 SteveCooling Hi guys. I'm running glusterfs-3.4.0-8 on CentOS 6.4 (64-bit). Experiencing what seems like a memory leak. dmesg shows a bunch of "swapper: page allocation failure" on all nodes. no _apparent_ practical problems so far. here's a graph of memory usage: https://dl.dropboxusercontent.com/u/683331/gluster-node-mem.png   Is this a known problem? If so, will 3.4.1 help?
07:14 glusterbot <http://goo.gl/su8M5M> (at dl.dropboxusercontent.com)
07:16 ngoswami joined #gluster
07:16 _br_ joined #gluster
07:16 basic- joined #gluster
07:17 keytab joined #gluster
07:18 SteveCooling As can be seen on the "by year" part, the problem started in mid August. That's when we upgraded to 3.4.0.
07:18 basic` joined #gluster
07:29 andreask joined #gluster
07:45 micu2 joined #gluster
07:52 mgebbe_ joined #gluster
07:54 rc10 I'm using 3.4.1; there is no migrate-data in rebalance
07:55 rc10 how can I rebalance data?
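
Since 3.3 the separate "migrate-data" sub-command is gone: a plain "start" both fixes the layout and migrates the data. A minimal sketch, assuming a volume named "myvol":

    gluster volume rebalance myvol start
    # poll until the status reports completed
    gluster volume rebalance myvol status
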
07:58 StarBeast joined #gluster
08:11 yongtaof joined #gluster
08:12 tryggvil joined #gluster
08:27 rgustafs joined #gluster
08:30 pkoro joined #gluster
08:31 tziOm joined #gluster
08:36 atrius joined #gluster
08:40 ababu joined #gluster
08:43 KORG|2 joined #gluster
08:50 mooperd joined #gluster
08:56 yongtaof joined #gluster
09:03 vpshastry joined #gluster
09:05 jtux joined #gluster
09:08 glusterbot New news from newglusterbugs: [Bug 1016482] Owner of some directories become root <http://goo.gl/fJd8d3>
09:10 santir joined #gluster
09:11 tryggvil joined #gluster
09:25 davinder2 joined #gluster
09:29 NuxRo anyone knows if this made it in 3.4.1? http://review.gluster.org/#/c/6029/
09:29 glusterbot Title: Gerrit Code Review (at review.gluster.org)
09:30 vpshastry1 joined #gluster
09:31 ndevos NuxRo: doesn't look like it, the change is not marked as 'merged' and https://bugzilla.redhat.com/1012400 does not have any replacements for it (merged in master though)
09:31 glusterbot Title: Bug 1012400 Problems with activating nfs.enable-ino32 (at bugzilla.redhat.com)
09:32 NuxRo ndevos: thanks
09:33 NuxRo ndevos: is there a release announcement for 3.4.1 with what's new?
09:35 ndevos NuxRo: I don't know, I have not seen one. It should not be a new-features release, more of a bugfix one, so maybe there are no release notes for it?
09:36 rc10 joined #gluster
09:36 NuxRo looks like it
09:38 hagarth joined #gluster
09:39 glusterbot New news from newglusterbugs: [Bug 1016494] Volume status operation after remove-brick is started on a volume fails, until remove-brick commit or remove-brick stop is done. <http://goo.gl/emig6q>
09:44 harish_ joined #gluster
09:47 rc10 when a node fails and writes happen, the disk size on the failed node is inconsistent
09:47 rc10 though, files are synced
09:48 sgowda joined #gluster
09:49 rastar joined #gluster
09:52 ppai joined #gluster
09:53 Staples84 joined #gluster
09:58 Shdwdrgn joined #gluster
10:02 Guest68939 is there a way to find out which brick is in which subvolume?
10:08 social JoeJulian: do you use replace-brick with 3.4.0 ?
10:11 hagarth joined #gluster
10:15 nshaikh joined #gluster
10:28 sgowda joined #gluster
10:30 ngoswami joined #gluster
10:30 kkeithley1 joined #gluster
10:30 vpshastry1 joined #gluster
10:35 rc10 Hi, I have millions of small files - which is the better choice with gluster - ext4 or xfs?
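
XFS with a 512-byte inode size is the commonly recommended brick filesystem for gluster, since its xattrs then fit inside the inode. A sketch, assuming /dev/sdb1 is the brick device:

    mkfs.xfs -i size=512 /dev/sdb1
    mount -o noatime /dev/sdb1 /export/brick1
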
10:37 rc101 joined #gluster
10:37 kanagaraj joined #gluster
10:40 edward1 joined #gluster
10:45 shubhendu joined #gluster
10:49 andreask joined #gluster
10:51 Alpinist joined #gluster
10:52 ababu joined #gluster
10:52 ngoswami joined #gluster
10:53 ndarshan joined #gluster
10:59 dusmant joined #gluster
10:59 psharma joined #gluster
11:00 jtux joined #gluster
11:08 NuxRo ndevos: is there a list of bugs fixed in 3.4.1?
11:09 vpshastry1 joined #gluster
11:10 NuxRo ndevos: nevermind, found the changelog file :)
11:18 RameshN joined #gluster
11:31 ababu joined #gluster
11:31 ndarshan joined #gluster
11:35 spandit joined #gluster
11:38 dusmant joined #gluster
11:41 kanagaraj joined #gluster
11:43 raar joined #gluster
11:48 ppai joined #gluster
11:49 rjoseph joined #gluster
11:55 eseyman joined #gluster
11:59 AliRezaTaleghani joined #gluster
11:59 AliRezaTaleghani can I start glusterFS just on one node?
12:01 rwheeler joined #gluster
12:04 rc101 AliRezaTaleghani: yes
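
A single-node setup is just a one-brick volume. A minimal sketch, with hostname and paths as placeholders:

    gluster volume create gv0 server1:/export/brick1
    gluster volume start gv0
    mount -t glusterfs server1:/gv0 /mnt/gv0
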
12:04 ngoswami joined #gluster
12:06 nshaikh joined #gluster
12:09 badone joined #gluster
12:09 rastar joined #gluster
12:13 raz joined #gluster
12:13 raz is there a limit on the size of a single gluster volume?
12:19 gyutyuglf joined #gluster
12:21 psharma joined #gluster
12:21 dusmant joined #gluster
12:28 monotek joined #gluster
12:29 monotek hi
12:29 glusterbot monotek: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:31 monotek if i run "gluster volume heal gv0 info" i get the following output: http://pastebin.com/cFZZmTik
12:31 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
12:33 monotek does this mean something is wrong? because wsus.qcow2 shows up only once?
12:33 monotek it's a replicated & distributed volume with 6 bricks (2 replicas).
12:34 samppah monotek: do you get that same output everytime?
12:35 monotek yes, on this node. other node show other result...
12:35 monotek i already tried to run rebalance...
12:35 monotek seems not to help...
12:36 monotek but i want to know first if my assumption is right, that there have to be 2 entries of wsus.qcow2 when replica is set to 2?
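
One way to check is to look at the file directly on each brick: with replica 2 the file should exist on two bricks, and its AFR xattrs show any pending heal counts. A sketch, with the brick path as a placeholder:

    # run on each brick server; non-zero trusted.afr values mean pending heals
    getfattr -m . -d -e hex /export/brick1/images/wsus.qcow2
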
12:45 Alpinist joined #gluster
12:52 B21956 joined #gluster
12:55 abradley joined #gluster
12:59 AliRezaTaleghani left #gluster
13:00 ricky-ticky joined #gluster
13:03 gyutyuglf left #gluster
13:03 badone joined #gluster
13:12 abradley joined #gluster
13:20 dtyarnell joined #gluster
13:26 H__ joined #gluster
13:28 H__ Question: How does one repair split-brain on directories? (in contrast to http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ for files)
13:28 glusterbot <http://goo.gl/FPFUX> (at joejulian.name)
13:34 rgustafs joined #gluster
13:36 andreask joined #gluster
13:40 eseyman joined #gluster
13:44 bala joined #gluster
13:46 jclift joined #gluster
13:46 itisravi_ joined #gluster
13:56 kaptk2 joined #gluster
13:59 zaitcev joined #gluster
14:00 jcsp joined #gluster
14:00 chirino joined #gluster
14:12 bet_ joined #gluster
14:14 bugs_ joined #gluster
14:20 monotek joined #gluster
14:21 monotek left #gluster
14:22 wushudoin joined #gluster
14:23 JoeJulian H__: Good question. Pick a "good" one and reset the trusted.afr values on the other(s).
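
A sketch of that reset, run on the brick holding the copy you decided is stale; the volume name, client indices and path here are placeholders (read the real xattr names off the directory first with getfattr):

    # zero out the pending-operation counters so self-heal picks the other copy
    setfattr -n trusted.afr.myvol-client-0 -v 0x000000000000000000000000 /export/brick1/some/dir
    setfattr -n trusted.afr.myvol-client-1 -v 0x000000000000000000000000 /export/brick1/some/dir
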
14:25 JoeJulian raz: The only limit is the number and maximum filesystem size of the bricks that you add to the volume. Works out to around 5 brontobytes.
14:26 JoeJulian social: I haven't had an opportunity to use replace-brick with 3.4 yet.
14:27 JoeJulian SteveCooling: Do you use georeplication?
14:28 social JoeJulian: well tests passed well.
14:29 jclift joined #gluster
14:30 pkoro joined #gluster
14:30 jclift left #gluster
14:30 jclift joined #gluster
14:30 abradley how is this possible: glusterfs1 is giving an error that there is an existing volume, but there is not: http://paste.ubuntu.com/6209546/
14:30 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:31 jclift left #gluster
14:37 tryggvil joined #gluster
14:39 jclift joined #gluster
14:44 rc10 joined #gluster
14:48 harish_ joined #gluster
14:56 abradley here's a better paste: http://paste.ubuntu.com/6209647/
14:56 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:56 abradley how is that possible?
14:58 andreask you already used this dir as a brick?
15:01 shylesh joined #gluster
15:03 andreask abradley: JoeJulian made a nice blogpost about that problem http://goo.gl/YUzrh
15:03 glusterbot Title: GlusterFS: {path} or a prefix of it is already part of a volume (at goo.gl)
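
Roughly, the fix from that post is to strip the leftover gluster metadata from the brick root; the path is a placeholder:

    setfattr -x trusted.glusterfs.volume-id /export/brick1
    setfattr -x trusted.gfid /export/brick1
    rm -rf /export/brick1/.glusterfs
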
15:06 sprachgenerator joined #gluster
15:06 jag3773 joined #gluster
15:10 glusterbot New news from newglusterbugs: [Bug 990028] enable gfid to path conversion <http://goo.gl/1HwiQc>
15:16 spandit joined #gluster
15:24 jmalm joined #gluster
15:25 jclift left #gluster
15:28 Technicool joined #gluster
15:33 LoudNoises joined #gluster
15:36 jmalm I am having some split brain issues.  I have run the commands to fix that, but when trying to do anything with the file in the file system after that, it returns with "Invalid argument"
15:36 anands joined #gluster
15:36 dbruhn joined #gluster
15:38 monotek joined #gluster
15:46 thanasisk joined #gluster
15:46 thanasisk are there specific gluster apt repositories?
15:46 semiosis thanasisk: ,,(latest)
15:46 glusterbot thanasisk: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
15:46 semiosis thanasisk: what distro?
15:47 thanasisk semiosis, debian 7
15:47 semiosis http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/
15:47 glusterbot <http://goo.gl/l2Ml1> (at download.gluster.org)
15:47 thanasisk cheers
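
Wiring that repo up on Debian 7 looks roughly like the following; the key and pool paths under download.gluster.org are assumptions, so check the repo's README for the exact layout:

    wget -O - http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/pubkey.gpg | apt-key add -
    echo "deb http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/apt wheezy main" > /etc/apt/sources.list.d/gluster.list
    apt-get update && apt-get install glusterfs-server
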
15:47 phox joined #gluster
15:47 semiosis glusterbot: version
15:47 glusterbot semiosis: The current (running) version of this Supybot is 0.83.4.1+limnoria installed on 2013-07-15T22:40:38+0000. The newest versions available online are 2013-10-06T14:26:03 (in testing), 2013-09-11T17:27:10 (in master).
15:48 semiosis JoeJulian: when i upgraded to the latest limnoria the goo.gl shortener broke
15:48 phox hahaha oops apparently there's a #/join
15:48 phox for those of us who type erratically
15:52 vpshastry joined #gluster
15:53 * phox smears Crisco on glusterbot
15:56 toad joined #gluster
15:59 rotbeard joined #gluster
16:01 mohankumar joined #gluster
16:07 vpshastry left #gluster
16:21 badone joined #gluster
16:24 Lethalman joined #gluster
16:27 t35t0r joined #gluster
16:27 t35t0r joined #gluster
16:28 Mo_ joined #gluster
16:29 abradley here's a better paste: http://paste.ubuntu.com/6209647/ How is this possible? These are two brand new ubuntu 12 server setups and this isn't a "reusing" bricks scenario.
16:29 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
16:39 purpleidea abradley: the first time you ever ran a create volume and it failed, it could have partially made some files. clear out the bricks to be sure.
16:39 purpleidea abradley: also check your dns is working properly
16:42 abradley I can ping "glusterfs1" and 2
16:42 abradley I'll try and find how to clear out bricks
16:45 anands joined #gluster
16:57 hagarth joined #gluster
17:04 tryggvil joined #gluster
17:10 lalatenduM joined #gluster
17:20 hagarth1 joined #gluster
17:29 dtyarnell joined #gluster
17:32 verdurin joined #gluster
17:41 jbrooks joined #gluster
17:44 mistich1 is there any performance setting I need to set for a 10gig network with gluster
17:44 semiosis jumbo frames
17:44 semiosis just guessing
17:45 GLHMarmot joined #gluster
17:49 hagarth joined #gluster
17:56 phox jumbo frames are actually kinda stupid
17:57 phox what you actually care about is all of the other TCP parameters, buffer size etc
17:57 phox mostly IIRC set in sysctl.conf
17:57 phox also you might want to encourage various applications to not use tiny read/write sizes, and even not use lots of tiny files, because that's going to be slow over any speed of network
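
Illustrative 10GbE buffer settings for /etc/sysctl.conf; the numbers are generic starting points, not gluster-specific recommendations:

    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    # apply without a reboot: sysctl -p
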
17:58 mistich1 so nothing in gluster needs to be set
17:58 phox oh one thing
17:58 phox which I wish could go higher
17:58 phox performance.read-ahead-blocks or whatever it is
17:58 phox there's a couple of others but I think that's the most significant
17:58 phox perf will severely suck without it if doing large sequential reads
17:59 mistich1 im trying to use gluster for rrd files
17:59 phox performance.read-ahead-page-count 16
17:59 phox heh
17:59 phox fortunately RRD files are not typically very high throughput period
17:59 phox so having lots of them is not gonna kill you
17:59 phox OTOH do consider, depending on how often they're rotated out, that lots of files in a single dir can list kinda slowly
18:00 phox gluster 3.4 is a lot less bad for that, but still not spectacular
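
Assuming the option name phox cites is current in 3.4, it is set per volume like any other tunable ("myvol" is a placeholder):

    gluster volume set myvol performance.read-ahead-page-count 16
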
18:01 mistich1 using 3.4, and when I run directly on the file system I get
18:01 mistich1 Create     10 rrds      3 c/s (0.00125 sdv)   Update     10 rrds   37672 u/s (0.00001 sdv)
18:01 mistich1 on gluster mount using fuse I get
18:01 mistich1 Create     10 rrds      3 c/s (0.04224 sdv)   Update     10 rrds     163 u/s (0.00596 sdv)
18:01 mistich1 this is running perftest.pl from rrdtool
18:02 mistich1 you can see the u/s updates a second is really low on gluster mount
18:02 mistich1 I expected to lose some but not 37500 per second
18:04 purpleidea phox: jumbo frames aren't stupid at all... it just depends on your workloads if you'll need them and if they'll help or not.
18:04 purpleidea mistich1: i'd recommend testing with and without. use something like iptraf to see how many jumbo frame packets are sent, and if it's significant, you'll know it probably helped.
18:05 purpleidea phox: that's not to say other things aren't helpful to tune also, but jumbo can help depending on what you're pushing
18:14 phox TSO and friends generally make jumbo frames pretty irrelevant
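
Testing jumbo frames is a one-liner per host; the interface name and MTU are placeholders, and every switch port on the path must accept the larger MTU as well:

    ip link set dev eth0 mtu 9000
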
18:17 mistich1 iptraf without jumbo frames got 9 Gig of throughput
18:18 johnmark phox: our perf guys have found that jumbo frames make a big difference
18:18 mistich1 but will check how many jumbo frames
18:18 mistich1 johnmark but I don't think that is going to improve my situation that much
18:19 mistich1 either I am missing something or gluster cannot handle rrd updates
18:21 johnmark mistich1: hrm. ok. what tool is doing the RRD updates?
18:22 mistich1 right now using  perftest.pl to test with but will be using zenoss in the final implementation
18:23 mistich1 perftest.pl is in the contrib dir of rrdtool
18:24 mistich1 here is the output running it on the gluster server: 10 rrds   37672 u/s
18:24 mistich1 and here is the output from the gluster mount using fuse: 10 rrds     202 u/s
18:24 mistich1 there is a big difference
18:28 johnmark mistich1: 'tis true. and RRDs are essentially text files, yes?
18:29 lkoranda joined #gluster
18:30 semiosis johnmark: rrd is a binary database file
18:31 semiosis so, all the usual issues with storing binary database files on gluster come into play
18:31 semiosis :(
18:31 JoeJulian with regard to  <phox> performance.read-ahead-page-count: raising it is useful if you have the memory to do so when you have a select few files per server that are being served. If your application uses a lot of disparate files, then not so much.
18:33 johnmark semiosis: hrm... why did I think it was essentially a formatted text file...
18:33 davinder joined #gluster
18:34 phox johnmark: did they actually set all of the other non-problem-prone parameters too?  heh.
18:34 semiosis johnmark: sorry there's no unix command i can easily run to answer *that* question
18:34 johnmark semiosis: lol
18:34 johnmark "42"
18:34 semiosis hahaha
18:34 johnmark :)
18:34 phox JoeJulian: we've found throughput without that absolutely dismal, and I would expect it to improve somewhat (although not linearly) if we could increase it to, say, 64
18:38 JoeJulian I'm just pointing out that that's use case specific. For my use case it would perform worse.
18:39 semiosis JoeJulian: fyi, when i updated logstashbot the other day the goo.gl shortener stopped working. you might encounter the same when you update glusterbot
18:40 semiosis JoeJulian: the expanded url has double http:// on the front
18:42 JoeJulian I'll take a look later today.
18:42 semiosis no big deal, i just switched to tinyurl
18:42 JoeJulian I think I remember fixing that bug once....
18:44 sprachgenerator joined #gluster
18:44 semiosis afk
18:55 mistich1 ok jumbo frames did not help
18:56 JonathanD joined #gluster
18:56 rwheeler_ joined #gluster
18:58 mistich1 who could I ask at gluster about what settings I should be using?
18:58 coredump_ joined #gluster
18:59 rwheeler__ joined #gluster
19:04 abradley what is the best way to make a gluster volume accessible to windows?
19:04 mistich1 well folks thanks for the help will try again tomorrow
19:04 abradley It looks like mounting an nfs share in windows 7 pro isn't possible
19:05 jag3773 joined #gluster
19:07 johnmark mistich1: sure. wish we could be more helpful. I am curious to see if this is a use case we can support
19:11 mistich1 well about 1 million rrd files
19:12 JoeJulian mistich1: You're asking us. :) We're the "who". If you'd like to spend money for someone to look at your use case and make a determination, consider the ,,(commercial) product.
19:12 glusterbot mistich1: Commercial support of GlusterFS is done as Red Hat Storage, part of Red Hat Enterprise Linux Server: see https://www.redhat.com/wapps/store/catalog.html for pricing also see http://www.redhat.com/products/storage/ .
19:13 JoeJulian abradley: I've mounted NFS in windows 7 before.... been forever since I did it though...
19:13 JoeJulian abradley: There's also api support in samba4. You could run a samba server serving from GlusterFS.
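
A sketch of that Samba route via the vfs_glusterfs module (share and volume names are placeholders, and the Samba build must include the module):

    [gv0]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = gv0
        glusterfs:volfile_server = localhost
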
19:14 JoeJulian "<mistich1> do you work for redhat/gluster?" No. I am a ,,(volunteer).
19:14 glusterbot A person who voluntarily undertakes or expresses a willingness to undertake a service: as one who renders a service or takes part in a transaction while having no legal concern or interest or receiving valuable consideration.
19:15 mistich1 ok might just have to contact support and see what they say
19:16 JoeJulian abradley: Ah, right... I was using ultimate... Maybe http://www.labf.com/nfsaxe/index.html ?
19:16 mistich1 thanks everyone for the help
19:16 glusterbot Title: nfsAxe - Windows NFS Client and NFS Server for Windows (at www.labf.com)
19:17 JoeJulian mistich1: Good luck. Let us know how it turns out.
19:21 shane_ hi all, using gluster 3.4.1 i'm not having any luck mounting distributed volumes over rdma. single brick volumes work fine. this is on debian wheezy using stock ofed packages for infiniband (1.4.2). attempts to mount distributed volumes just hang.
19:21 semiosis shane_: please ,,(pasteinfo) and also the client log
19:21 glusterbot shane_: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:25 shane_ gluster volume info: http://fpaste.org/45234/26015713/
19:25 glusterbot Title: #45234 Fedora Project Pastebin (at fpaste.org)
19:26 shane_ when i specify log-file in the mount options on the client, that file is never created
19:26 shane_ but this appears in the gluster server's volume log:
19:26 shane_ http://fpaste.org/45236/60306138/
19:26 glusterbot Title: #45236 Fedora Project Pastebin (at fpaste.org)
19:29 semiosis shane_: gluster version?  distro?
19:29 shane_ gluster 3.4.1, debian wheezy
19:33 semiosis shane_: gluster client makes a log file (by default) in /var/log/glusterfs/the-mount-point.log
19:33 semiosis i think we need to see the client log
19:34 semiosis though idk anything about rdma, so doubt i'll be much help
19:38 shane_ thanks, here's the client log: http://fpaste.org/45240/26108813/
19:38 glusterbot Title: #45240 Fedora Project Pastebin (at fpaste.org)
19:41 shane_ i notice that client log says "Using Program GlusterFS 3.3" even though both client and server have 3.4.1
19:46 l0uis shane_: what does gluster volume status show ?
19:50 shane_ l0uis: looks good to me http://fpaste.org/45250/38126179/
19:50 glusterbot Title: #45250 Fedora Project Pastebin (at fpaste.org)
19:50 phil___ joined #gluster
19:50 abradley if one node goes down how does gluster offer high availability?
19:50 semiosis abradley: whats a node, exactly?  ,,(glossary)
19:50 glusterbot abradley: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
19:51 abradley aaah, so for high availability you must use a client?
19:51 abradley I had a hypervisor setup to access over nfs directly to node1
19:51 semiosis abradley: generally speaking the easiest way to get HA is to use a fuse client, which connects directly to all bricks in a volume.  when the volume uses replication, this can deliver HA
19:51 abradley cool, great info, thanks
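
A sketch of such a fuse mount; backupvolfile-server only matters for fetching the volume file at mount time, since the client then talks to all bricks itself (hostnames are placeholders):

    mount -t glusterfs -o backupvolfile-server=glusterfs2 glusterfs1:/vol1 /mnt/vol1
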
19:59 l0uis shane_: not sure ... the 3.3 reference in the client log would make me suspicious. you sure all versions are up to date and daemons restarted etc?
19:59 l0uis shane_: did you recently update?
20:02 jag3773 joined #gluster
20:05 lpabon joined #gluster
20:25 shane_ l0uis: all involved servers are fresh wheezy installs with gluster 3.4.1 installed from the gluster debian repo
20:25 abradley Is there a guide for setting up gluster-client on ubuntu server? I'm having trouble finding one.
20:27 shane_ l0uis: so it seems unlikely that a gluster 3.3 component would have been installed somehow, although anything is possible
20:41 glusterbot New news from newglusterbugs: [Bug 1012863] Gluster fuse client checks old firewall ports <http://goo.gl/3UsZxe>
20:42 abradley I've got a gluster cluster, and I've set up a new node (ubuntu 12 server). how do I add a partition on it as a "brick" in the gluster-cluster volume?
20:43 abradley the gluster-cluster already has a rep volume
20:43 abradley "vol1"
20:45 badone joined #gluster
20:48 semiosis see add-brick in the ,,(rtfm)
20:48 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
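
Roughly, per the manual: probe the new server, then add bricks in multiples of the replica count (hostnames and paths are placeholders):

    gluster peer probe glusterfs3
    # for a replica-2 volume, bricks must be added in pairs
    gluster volume add-brick vol1 glusterfs3:/export/brick1 glusterfs4:/export/brick1
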
20:50 badone joined #gluster
20:50 shane_ is there someone specific i should chase down for rdma bugs? perhaps raghavendra?
20:56 l0uis shane_: sorry, i am not rdma savvy. wish I could help more.
20:57 shane_ no worries, it's a pretty arcane subject. thanks for the help.
21:07 _br_ joined #gluster
21:08 sac`away joined #gluster
21:15 mooperd joined #gluster
21:28 mooperd joined #gluster
22:02 shane_ changing the transport type of a volume (gluster 3.4.1) using "gluster volume set <VOL> transport tcp" doesn't seem to be working for me as described here: http://review.gluster.org/#/c/4008/
22:02 glusterbot Title: Gerrit Code Review (at review.gluster.org)
22:03 shane_ in volume info "config.transport: tcp" shows up under "Options reconfigured" but the transport remains rdma
22:03 shane_ is there something i need to do to commit or reload the reconfigured options?
22:08 semiosis no, they're updated in real time
22:09 shane_ so this should never happen? http://fpaste.org/45293/38127018/
22:09 glusterbot Title: #45293 Fedora Project Pastebin (at fpaste.org)
22:10 semiosis no idea, sorry
22:10 semiosis i've never messed with rdma
22:10 semiosis afk
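
If the change shane_ is testing works as its review describes, the reconfigured transport should only take effect across a volume restart; a sketch, with the volume name as a placeholder:

    gluster volume set myvol config.transport tcp
    gluster volume stop myvol
    gluster volume start myvol
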
22:55 jag3773 joined #gluster
