
IRC log for #gluster, 2014-02-12


All times shown according to UTC.

Time Nick Message
00:02 rfortier1 joined #gluster
00:11 jobewan joined #gluster
00:11 shyam joined #gluster
00:20 RicardoSSP joined #gluster
00:47 overclk joined #gluster
00:59 vpshastry joined #gluster
01:08 bala joined #gluster
01:20 pixelgremlins joined #gluster
01:22 vpshastry joined #gluster
01:27 pixelgremlins_ba joined #gluster
01:35 JordanHackworth joined #gluster
01:48 ulimit joined #gluster
01:52 harish joined #gluster
02:15 lyang0 joined #gluster
02:21 glusterbot New news from newglusterbugs: [Bug 1064096] The old Python Translator code (not Glupy) should be removed <https://bugzilla.redhat.com/show_bug.cgi?id=1064096>
02:34 recidive joined #gluster
02:35 davinder joined #gluster
02:37 bharata-rao joined #gluster
02:38 harish joined #gluster
02:39 glusterbot New news from resolvedglusterbugs: [Bug 1023667] The Python libgfapi API needs more fops <https://bugzilla.redhat.com/show_bug.cgi?id=1023667>
02:51 glusterbot New news from newglusterbugs: [Bug 1049981] 3.5.0 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1049981>
03:01 eshy joined #gluster
03:05 edong23 joined #gluster
03:09 semiosis joined #gluster
03:11 _dist joined #gluster
03:12 _dist Matthaeus: you here ?
03:12 edong23 joined #gluster
03:14 Matthaeus Yeah, what's up?
03:15 _dist I destroyed my cluster on proxmox, I'm trying to do another cluster over my san, think you could walk me through it? It seems simple but I tried twice and failed both times
03:17 failshell joined #gluster
03:18 jporterfield joined #gluster
03:18 _dist Matthaeus: my san network is bond of eth8/9 on both nodes. Both nodes have host entries for themselves and each other on the SAN network (different from their names on vmbr0)
03:19 shubhendu joined #gluster
03:22 glusterbot New news from newglusterbugs: [Bug 1064103] add-brick lead to "open(....O_CREAT..) return No such <https://bugzilla.redhat.com/show_bug.cgi?id=1064103> || [Bug 1064108] add-brick lead to "open(....O_CREAT..) return No such file or directory" <https://bugzilla.redhat.com/show_bug.cgi?id=1064108>
03:25 Matthaeus _dist: I'm actually about to stagger home.  It's the end of a 12-hour day for me.
03:25 _dist no trouble, if there's a doc you can point me at? if not, I can wait until tomorrow :)
03:27 Matthaeus It wasn't particularly hard.
03:27 lyang0 joined #gluster
03:27 Matthaeus What error are you getting?
03:28 Matthaeus Check private messages for contact info; I need to head out.
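A minimal sketch of the kind of two-node setup _dist describes, assuming SAN-side host entries named node1-san and node2-san and a brick directory at /data/brick1 (all names here are hypothetical, not taken from the conversation):

    # from node1, probe the peer by its SAN-side hostname
    gluster peer probe node2-san
    # from node2, probe node1 by name too, so both peers are stored by hostname
    gluster peer probe node1-san

    # create and start a two-way replica volume that lives on the SAN network
    gluster volume create vmstore replica 2 node1-san:/data/brick1 node2-san:/data/brick1
    gluster volume start vmstore

Proxmox can then be pointed at one of the SAN hostnames and the volume name when the GlusterFS storage is added.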
03:30 failshel_ joined #gluster
03:34 k10 joined #gluster
03:36 hagarth1 joined #gluster
03:38 rfortier joined #gluster
03:46 itisravi joined #gluster
03:47 shyam joined #gluster
03:47 tokik joined #gluster
03:58 KyleG joined #gluster
03:58 KyleG joined #gluster
04:04 saurabh joined #gluster
04:06 vpshastry joined #gluster
04:13 ndarshan joined #gluster
04:26 rfortier joined #gluster
04:41 XpineX joined #gluster
04:41 kdhananjay joined #gluster
04:46 meghanam joined #gluster
04:48 jporterfield joined #gluster
04:50 shylesh joined #gluster
04:57 bala joined #gluster
05:07 ppai joined #gluster
05:07 failshell joined #gluster
05:15 prasanth joined #gluster
05:16 CheRi joined #gluster
05:19 zaitcev joined #gluster
05:31 nshaikh joined #gluster
05:35 yinyin joined #gluster
05:40 glusterbot New news from resolvedglusterbugs: [Bug 1064103] add-brick lead to "open(....O_CREAT..) return No such <https://bugzilla.redhat.com/show_bug.cgi?id=1064103>
05:41 kanagaraj joined #gluster
05:46 rastar joined #gluster
06:00 _NiC joined #gluster
06:07 surabhi joined #gluster
06:17 bulde joined #gluster
06:23 dusmant joined #gluster
06:25 lalatenduM joined #gluster
06:28 mohankumar joined #gluster
06:28 vimal joined #gluster
06:31 Philambdo joined #gluster
06:34 spiekey joined #gluster
06:35 RameshN_ joined #gluster
06:44 RameshN joined #gluster
06:47 raghu joined #gluster
06:52 glusterbot New news from newglusterbugs: [Bug 1054696] Got debug message in terminal while qemu-img creating qcow2 image <https://bugzilla.redhat.com/show_bug.cgi?id=1054696>
06:57 mohankumar__ joined #gluster
07:10 ngoswami joined #gluster
07:17 jtux joined #gluster
07:20 spiekey joined #gluster
07:24 spiekey_ joined #gluster
07:38 DV__ joined #gluster
07:43 solid_liq joined #gluster
07:43 solid_liq joined #gluster
07:55 rossi_ joined #gluster
08:02 ktosiek joined #gluster
08:06 ctria joined #gluster
08:07 tryggvil joined #gluster
08:15 harish joined #gluster
08:16 keytab joined #gluster
08:19 eseyman joined #gluster
08:40 hchiramm_ joined #gluster
08:43 franc joined #gluster
08:46 bulde joined #gluster
08:46 mbukatov joined #gluster
08:49 tryggvil joined #gluster
09:03 mgebbe_ joined #gluster
09:17 jporterfield joined #gluster
09:17 ninkotech__ joined #gluster
09:18 pixelgremlins joined #gluster
09:19 liquidat joined #gluster
09:25 shyam joined #gluster
09:26 tryggvil joined #gluster
09:29 aravindavk joined #gluster
09:29 ninkotech__ joined #gluster
09:37 shyam joined #gluster
09:39 calum_ joined #gluster
09:42 baoboa joined #gluster
09:48 muhh joined #gluster
09:50 bulde_ joined #gluster
10:04 bulde joined #gluster
10:18 bulde joined #gluster
10:25 shyam joined #gluster
10:29 khushildep joined #gluster
10:33 diegows joined #gluster
10:36 Slash joined #gluster
10:53 qdk joined #gluster
11:09 shubhendu joined #gluster
11:10 b0e joined #gluster
11:15 itisravi joined #gluster
11:19 aravindavk joined #gluster
11:30 shubhendu joined #gluster
11:33 ababu joined #gluster
11:35 harish joined #gluster
11:40 dusmant joined #gluster
11:48 ira joined #gluster
11:49 b0e1 joined #gluster
11:50 jporterfield joined #gluster
11:52 ndarshan joined #gluster
11:54 andreask joined #gluster
11:57 recidive joined #gluster
11:57 aravindavk joined #gluster
12:07 tokik joined #gluster
12:15 tokik joined #gluster
12:20 CheRi joined #gluster
12:23 edward1 joined #gluster
12:29 pdrakeweb joined #gluster
12:31 pixelgremlins_ba joined #gluster
12:31 chirino joined #gluster
12:32 ndarshan joined #gluster
12:36 hagarth joined #gluster
12:40 bennyturns joined #gluster
12:43 andreask joined #gluster
12:43 lalatenduM hagarth, ping
12:45 y4m4 joined #gluster
12:45 lalatenduM hagarth, I may not be able to attend today's community meeting, just wanted to give a heads up
12:48 hagarth @later tell lalatenduM ok, noted your message.
12:48 glusterbot hagarth: The operation succeeded.
12:50 vipulnayyar joined #gluster
12:54 bulde joined #gluster
13:01 marbu joined #gluster
13:02 recidive joined #gluster
13:08 ababu joined #gluster
13:20 hchiramm__ joined #gluster
13:27 hagarth joined #gluster
13:34 the-me joined #gluster
13:39 ppai joined #gluster
13:44 ira joined #gluster
13:48 sroy joined #gluster
13:50 sprachgenerator joined #gluster
13:56 hagarth joined #gluster
13:56 jmarley joined #gluster
14:07 bala joined #gluster
14:08 primechu_ joined #gluster
14:10 sprachgenerator joined #gluster
14:12 mbukatov joined #gluster
14:15 B21956 joined #gluster
14:17 vipulnayyar joined #gluster
14:19 japuzzo joined #gluster
14:22 pdrakeweb joined #gluster
14:33 ndk joined #gluster
14:41 rwheeler joined #gluster
14:44 spiekey joined #gluster
14:44 spiekey Hello!
14:44 glusterbot spiekey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:44 spiekey i need some help changing the brick address of a node :)
14:45 theron joined #gluster
14:46 theron joined #gluster
14:46 spiekey this is my current gluster: http://pastebin.com/fU8Brpvc
14:46 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:46 failshell joined #gluster
14:46 spiekey http://fpaste.org/76555/22163411/
14:46 glusterbot Title: #76555 Fedora Project Pastebin (at fpaste.org)
14:47 spiekey now i would like to change node2.localnet to node2.bond AND node3.localnet to node3.bond
14:49 theron_ joined #gluster
14:50 b0e joined #gluster
14:50 failshell joined #gluster
14:51 failshel_ joined #gluster
14:55 dbruhn joined #gluster
14:55 tdasilva joined #gluster
14:57 zaitcev joined #gluster
14:59 kanagaraj joined #gluster
15:00 vipulnayyar joined #gluster
15:01 andreask joined #gluster
15:03 spiekey anyone?
15:08 primechuck joined #gluster
15:10 dbruhn spiekey, missed the question, what's up?
15:13 spiekey this is my current gluster: http://fpaste.org/76555/22163411/
15:13 glusterbot Title: #76555 Fedora Project Pastebin (at fpaste.org)
15:13 spiekey now i would like to change node2.localnet to node2.bond AND node3.localnet to node3.bond
15:13 spiekey (since i want it to use my dedicated bonding interface)
15:13 spiekey how do i do that?
15:15 dusmant joined #gluster
15:16 semiosis spiekey: this is why we recommend using hostnames, so you dont have to reconfigure glusterfs to change addresses -- just update the hostname map (in DNS, of course!)
15:16 semiosis if you're going to reconfig gluster, you should set up hostnames dedicated for gluster -- gluster1.mydomain, etc
15:16 semiosis you can find help replacing server addresses here: ,,(replace)
15:16 glusterbot here: Useful links for replacing a failed server... if replacement server has different hostname: http://web.archive.org/web/20120508153302/http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/ ... or if replacement server has
15:16 glusterbot same hostname: http://goo.gl/rem8L
15:16 glusterbot Useful links for replacing a failed server... if replacement server has different hostname: http://web.archive.org/web/20120508153302/http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/ ... or if replacement server has same
15:16 glusterbot hostname: http://goo.gl/rem8L
15:16 dbruhn semiosis, he can just re-probe the servers and restart glusterd correct?
15:16 semiosis that was weird
15:17 semiosis dbruhn: afaik that wont update the brick addresses in the volume(s) that already exist
15:17 dbruhn ahh ok
15:17 semiosis need to do replace brick commit force iirc, that first link explains it
15:17 dbruhn glusterbot is going to ban itself for flooding
15:17 semiosis hahaha
15:18 semiosis actually i'm not even sure you can do replace brick commit force when the "two" servers are really the same
15:18 semiosis never tried this myself
15:19 semiosis i think another option is to stop all the gluster processes cluster-wide and edit the volfiles on all the servers to replace the hostname
15:19 semiosis that might be a better approach, if you can afford the downtime
15:20 semiosis afk
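Spelled out, the first option semiosis mentions would look roughly like the commands below. The volume name and brick paths are assumptions (the pasted volume info is not in this log), and, as semiosis notes, he is not certain replace-brick commit force works when the old and new names resolve to the same machine; the volfile-editing approach may be the safer route.

    # make the new names known to the pool
    gluster peer probe node2.bond
    gluster peer probe node3.bond
    # then, for each brick on those nodes (volume name and paths are hypothetical)
    gluster volume replace-brick myvol node2.localnet:/bricks/b1 node2.bond:/bricks/b1 commit force
    gluster volume replace-brick myvol node3.localnet:/bricks/b1 node3.bond:/bricks/b1 commit force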
15:23 bennyturns joined #gluster
15:28 calum_ joined #gluster
15:31 spiekey gluster peer attach <new server>  --> attach is not a valid command
15:38 shylesh joined #gluster
15:38 rossi_ joined #gluster
15:39 shubhendu joined #gluster
15:43 recidive joined #gluster
15:43 adam_ft joined #gluster
15:44 jmarley joined #gluster
15:47 dbruhn_ joined #gluster
15:48 Ownage joined #gluster
15:49 edong23_ joined #gluster
15:50 adam_ft Hey, does anyone know whether it is possible to setup geo-replication from a Gluster node to a Gluster mount on another server? I've managed to do it where the slave is a standard directory on a remote server, but not where the slave is a Gluster mount... Just getting the "faulty" status.
15:51 Ownage What kind of replication does gluster support? I'm trying to figure out if it's just copies based or if there's something more advanced
15:51 ulimit_ joined #gluster
15:52 recidive_ joined #gluster
15:52 solid_li1 joined #gluster
15:52 dbruhn_ Ownage, there are two types of replication: volume replication and geo-replication. They are both file based, but work differently
15:52 dbruhn_ which one are you interested in?
15:54 Ownage I'm trying to find a way to allow for node failure without having full copies
15:55 Ownage in other words if I have 500T total storage requirements, I'd like to not need to buy 1PB
15:55 Ownage and at the same time allow for at least 1 node to die
15:55 tg2 joined #gluster
15:55 stickybol joined #gluster
15:56 stickyboy joined #gluster
15:56 codex__ joined #gluster
15:56 simon__ joined #gluster
15:57 bugs_ joined #gluster
15:57 Ownage is that something gluster can do? I'm trying to avoid doubling the storage cost if possible
15:59 Ownage if an alternative is nfs+drbd and two server stacks, the tolerance of that is 50% basically, but it's limited to that horizontal arrangement. So I'm thinking about gluster because it can scale horizontally, but I'm not sure how fault tolerance scales with it. I mean it's nice if we can scale 500TB to 5 servers and they all need 500TB of space and you're left with 500T usable but that's not what I'm looking to do
15:59 chirino joined #gluster
15:59 samppah joined #gluster
16:00 Ownage I'm trying to find something which can allow for example 6 servers that each have 100TB and you're left with 500TB and fault tolerance of one node
16:00 diegows joined #gluster
16:02 dbruhn_ Ownage, gluster's replication is a straight multiple, so if you wanted 500TB of usable storage, you would need 1000TB of writable file system for it.
16:02 Ownage ok, that doesn't help me in this particular case, thanks
16:02 dbruhn_ np
16:02 baoboa joined #gluster
16:02 Ownage are you aware of any other solution which can do this kind of thing without purchasing vendor hardware?
16:03 XpineX joined #gluster
16:03 wgao joined #gluster
16:03 adam_ft Ownage, you might be interested in http://www.gluster.org/community/documentation/index.php/Features/disperse which is being worked on... Not sure if it's what you're after?
16:03 glusterbot Title: Features/disperse - GlusterDocumentation (at www.gluster.org)
16:03 dbruhn_ Honestly I am not super familiar with some of the other technologies; the problem that most of them have is that they end up with a metadata server or something like it, which can become a single point of failure
16:04 edward1 joined #gluster
16:04 dbruhn_ adam_ft, neat! when is that going to be introduced into production?
16:04 ndk` joined #gluster
16:04 Ownage I'm not interested in something that's not in production but I'll check it out at a later date on a different project when it's production ready, thanks
16:05 Ownage metadata servers can be replicated I'm guessing
16:05 adam_ft dbruhn_ no idea, but it seems to be being actively worked on so will have to keep an eye on it
16:05 dbruhn_ Another competing project is Ceph, I am not sure if it has the features you are looking for, but you can start looking
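For the capacity question above, the rough arithmetic for six 100 TB servers looks like this (numbers are illustrative):

    replica 2 volume : 600 TB raw -> 300 TB usable; every file has a second full copy
    disperse (5+1)   : 600 TB raw -> ~500 TB usable; any single brick can fail
                       (disperse was still an in-progress feature at the time of this log)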
16:07 primechu_ joined #gluster
16:12 hchiramm_ joined #gluster
16:14 Nev___ joined #gluster
16:15 vpshastry joined #gluster
16:17 gmcwhistler joined #gluster
16:20 hagarth joined #gluster
16:23 _dist joined #gluster
16:26 gmcwhistler joined #gluster
16:32 codex joined #gluster
16:37 micu joined #gluster
16:39 jbrooks joined #gluster
16:43 theron joined #gluster
16:45 lalatenduM joined #gluster
16:47 hybrid512 joined #gluster
16:51 adam_ft left #gluster
16:55 vpshastry left #gluster
16:58 theron joined #gluster
17:01 theron_ joined #gluster
17:01 XpineX joined #gluster
17:03 sprachgenerator joined #gluster
17:11 davinder2 joined #gluster
17:14 cp0k joined #gluster
17:15 rpowell joined #gluster
17:19 spiekey left #gluster
17:20 theron joined #gluster
17:20 theron joined #gluster
17:22 xymox joined #gluster
17:27 ctria joined #gluster
17:34 tryggvil joined #gluster
17:36 Mo_ joined #gluster
17:40 recidive joined #gluster
17:42 lalatenduM joined #gluster
17:44 ngoswami joined #gluster
17:45 daMaestro joined #gluster
17:49 Matthaeus joined #gluster
17:49 zerick joined #gluster
17:54 Matthaeus Problem with openvz on gluster number 1:  weird permissions errors when trying to do normal tasks, like upgrading perl.
17:56 Matthaeus Problem with openvz on gluster number 2:  performance.  Oh my dear god, performance.
17:59 thiagodasilva left #gluster
18:00 tdasilva joined #gluster
18:03 allig8r joined #gluster
18:04 rfortier joined #gluster
18:06 l0uis joined #gluster
18:27 Feil joined #gluster
18:27 Feil hi. anyone testing/working on gluster-infiniband+rdma/ipoib?
18:28 dbruhn_ Feil, what do you need to know?
18:29 svenwiesner joined #gluster
18:30 lalatenduM Matthaeus, I haven't seen much about openvz on gluster in gluster-users or gluster-devel, so I think you should send your query to gluster-users. It might get you a few more replies than irc
18:30 svenwiesner Hey there. Having "Error through RPC layer", information on the net did not help unfortunately. Any ideas? ubuntu server 13.04 using semiosis-ubuntu-glusterfs-3_4-raring.list
18:31 lalatenduM Matthaeus, http://www.gluster.org/interact/mailinglists/
18:31 Matthaeus Thanks, lalatenduM
18:32 lalatenduM Matthaeus, :)
18:34 psyl0n joined #gluster
18:34 rossi_ joined #gluster
18:36 Feil dbruhn_: Well, after some testing rdma mounts via fuse disappeared
18:36 Feil on 3.5 beta3
18:37 Feil as 3.4 complains about using rdma
18:37 Feil i had ping-timeout=2
18:37 Feil reboots or remounts via client didnt help
18:37 dbruhn_ Feil, production or test?
18:38 Feil ssd-rdma.log:[2014-02-12 18:21:52.800965] W [rdma.c:1076:gf_rdma_cm_event_handler] 0-ssd-rdma-client-0: cma event RDMA_CM_EVENT_REJECTED, error 8 (me:10.10.10.3:1023 peer:10.10.10.1:49156)
18:38 Feil dbruhn_: test of course, im testing ipoib vs. rdma on gluster
18:38 Feil rdma seems flaky
18:38 dbruhn_ go back to 3.3.2 it should work
18:39 Feil but 3.3.2 is buggy too
18:39 dbruhn_ I don't believe anything 3.4.x and up has functioning RDMA
18:39 Feil with features
18:39 dbruhn_ Feil, I am running RDMA on 3.3.2 in production very successfully
18:41 Feil ok, are you missing any features that 3.4.x provides?
18:41 Feil what kind of volumes?
18:41 Feil replicated?
18:41 Feil did you notice a performance benefit of rdma compared to ipoib?
18:42 dbruhn_ Mine are distributed+replicated. I would like the brick quorum features, but you need to have a replica 3 for it to work, and I can't afford the extra disk
18:42 dbruhn_ I honestly went straight to RDMA from 1GBe
18:42 dbruhn_ I would be interested in seeing a performance test between the two if you are putting that info together
18:43 dbruhn_ since RDMA support always lags behind the TCP/IP support
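A rough sketch of the two transports being compared, with a hypothetical volume name and brick path; exact mount syntax varies a little between releases:

    # create a volume that offers both transports
    gluster volume create rdmavol transport tcp,rdma server1:/bricks/b1 server2:/bricks/b1
    gluster volume start rdmavol

    # mount it over RDMA ...
    mount -t glusterfs -o transport=rdma server1:/rdmavol /mnt/rdma
    # ... or over IPoIB, i.e. plain TCP against the IB interface's IP/hostname
    mount -t glusterfs server1-ib:/rdmavol /mnt/ipoib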
18:43 Feil dbruhn_: give me your email.
18:43 dbruhn_ deanbruhn@gmail.com
18:44 Feil ill send you something when im done.
18:44 Feil which IB are you running btw?
18:44 Feil qdr, ddr?
18:44 dbruhn_ I have QDR in my production, and DDR in my test systems
18:45 dbruhn_ my test system are in shambles right now though
18:46 jmarley joined #gluster
18:46 jmarley joined #gluster
18:49 Feil dbruhn_: what kind of volumes? what kind of files?
18:50 dbruhn_ All of my volumes are dist+replica x2 on xfs
18:50 jmarley joined #gluster
18:50 dbruhn_ My files are a mix, I have about 40 million files to every 20TB of disk usage. Some large, some small, some in between.
18:51 tryggvil joined #gluster
18:52 jmarley joined #gluster
18:52 jmarley joined #gluster
18:53 Feil but archival or active use?
18:54 theron joined #gluster
18:54 dbruhn_ Active use, my usage patterns are about 99% random read, 1% write.
18:54 dbruhn_ I am doing header reads on all of the files in the system at all times
18:55 Feil what kind of storage enclosures are you using?
18:55 nikk anyone have a minute to help me out with a noobish problem that i can't seem to figure out? :]
18:55 asku joined #gluster
18:57 semiosis svenwiesner: can you please put the log with that error up on pastie.org so we can have a look?
18:57 dbruhn_ Feil, I have three systems, two are dell R720xd's, one is a 6 server system running 2TB sata drives, one is a 12 server system running 600GB 15k SAS, and the third is a 6 server system running 3TB 7200 sata
18:57 dbruhn_ nikk, what is the issue
18:58 nikk dbruhn_: i currently have a replicated volume with two nodes, one brick per node.  i want to add two more hosts (one brick per host) - i'm executing "gluster volume add-brick gv0 rhel3:/data/gv0/brick1 rhel4:/data/gv0/brick1" however it returns "volume add-brick: failed:" which doesn't tell me a whole lot :)
18:58 Feil dbruhn_: this is your company's?
18:58 nikk i tried adding "replica 2" after "add-brick" with the same result
18:58 dbruhn_ Feil, yep
18:59 dbruhn_ nikk, have you checked the logs?
18:59 dbruhn_ and did you add the servers to the peer group first?
18:59 Feil dbruhn_: mellanox adapters?
18:59 nikk dbruhn_: yeah, i "peer probe rhel3" (and 4)
18:59 semiosis svenwiesner: please put the glusterd log on pastie.org, that is usually /var/log/glusterfs/etc-glusterfs-glusterd.log
18:59 dbruhn_ Feil, mellanox on switch and card side. connectx-3 I believe
18:59 nikk dbruhn_: any particular log?  i didn't see anything too helpful
19:01 dbruhn_ nikk, anything in the glusterhsd.log
19:01 dbruhn_ also are iptables or SELinux running?
19:01 nikk neither
19:02 nikk there's no glusterhsd.log in /var/log or /var/log/glusterfs
19:02 nikk on any of the four servers
19:02 dbruhn_ nikk, what do you see from "gluster peer status"
19:02 cp0k nikk: I believe you need to specify replica 2 in the add-brick command
19:02 nikk on all four i'm connected to three others
19:02 nikk State: Peer in Cluster (Connected)
19:02 nikk cp0k: tried it, same result
19:03 cp0k nikk: and the volume name you are specifying is correct?
19:03 cp0k gv0
19:03 nikk haha yeah, it is :)
19:03 cp0k sorry if that was a silly question, just making sure
19:04 Feil dbruhn_: what kind of throughput are you getting? qdr link speed 40 gbits actually gives 26 gbits over iperf.
19:04 nikk gluster> volume add-brick gv0 replica 2 rhel3:/data/gv0/brick1 rhel4:/data/gv0/brick1
19:04 nikk volume add-brick: failed:
19:04 nikk cp0k: of course :)
19:05 nikk it's just weird that it's not outputting an actual error
19:05 dbruhn_ Feil, honestly I have never done throughput testing, my bigger driver was lowering latency
19:06 cp0k nikk: I would bump up the level logging in gluster and then tail /var/log/gluster/*
19:07 cp0k nikk: perhaps you have the log level set to warning or something minimal, causing you to not see anything in the logs pertaining to the problem
19:07 nikk k sec
19:07 nikk diagnostics.brick-log-level ?
19:08 nikk or client as well
19:09 spiekey_ joined #gluster
19:10 spiekey_ Hi
19:10 glusterbot spiekey_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:12 spiekey_ can someone tell me how to read this? http://fpaste.org/76655/13922323/
19:12 glusterbot Title: #76655 Fedora Project Pastebin (at fpaste.org)
19:12 spiekey_ do i have a split brain?
19:13 cp0k nikk: correct, I would bump up both brick-log-level and client
19:13 cp0k nikk: once you identify the problem, you can bump it back down
19:13 nikk k sec, warning or debug?
19:16 svenwiesner Dear semiosis: http://pastebin.com/2n6r9SnT
19:16 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
19:16 svenwiesner sry for pastebin, will use alternatives next time
19:16 svenwiesner (did not know)
19:18 Ownage I don't see ads on pastebin. If you don't like ads why don't you just link an adblock utility, kill several birds with one stone
19:18 svenwiesner don't wanna be rude but I think you are talking to a bot
19:19 svenwiesner Pasted error concerns an rhel 6.5 machine connecting to an ubuntu 13.04, both running glusterfs 3.4.2
19:20 cp0k nikk: set to debug
19:21 svenwiesner okok, I guess I found the error
19:22 svenwiesner "Request received fromnon-privileged port"
19:26 Ownage I'm talking to whoever wrote the bot
19:26 Ownage I see the same whiney complaining spam message in several channels
19:28 nikk cp0k: on one of the new servers i see this - [2014-02-12 14:27:11.202236] E [glusterd-op-sm.c:3719:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Add brick', Status : -1
19:29 cp0k nikk: hmm, that doesn't really tell me much :/
19:29 cp0k s/me/us/g
19:29 nikk that's all there really is at that time
19:29 nikk yeah haha
19:29 spiekey_ anyone?  http://fpaste.org/76655/13922323/ any idea why my replication is not sync? :)
19:29 glusterbot Title: #76655 Fedora Project Pastebin (at fpaste.org)
19:30 cp0k nikk: another silly question: you have glusterd running on all nodes and the versions all match up?
19:30 nikk glusterd and glusterfsd, yes
19:31 nikk peer status shows that the other three nodes are connected (on each host)
19:31 semiosis svenwiesner: make sure you have the same version of glusterfs installed on all servers (and clients)
19:31 semiosis svenwiesner: i'm going to lunch, bbiab
19:32 svenwiesner thank you, see you
19:32 semiosis svenwiesner: oh also check iptables ,,(ports) and disable selinux
19:32 glusterbot svenwiesner: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
19:32 super_du_ joined #gluster
19:32 svenwiesner alright, thank you I will look into it
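An iptables sketch matching the port list glusterbot gives above, for a 3.4 server; the upper end of the brick range is just an allowance, since each brick takes one port from 49152 upwards:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (24008 for rdma)
    iptables -A INPUT -p tcp --dport 49152:49200 -j ACCEPT   # brick daemons, glusterfs 3.4+
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # portmapper
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS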
19:33 Jugernaut joined #gluster
19:33 cp0k nikk: do you have any special options set on your gluster volume? can you paste the output of 'gluster volume info' for us on http://fpaste.org/
19:33 glusterbot Title: New paste Fedora Project Pastebin (at fpaste.org)
19:36 nikk http://ur1.ca/gm8a4
19:36 glusterbot Title: #76665 Fedora Project Pastebin (at ur1.ca)
19:38 BACONATOR joined #gluster
19:39 KyleG joined #gluster
19:39 KyleG joined #gluster
19:41 svenwiesner Regarding issue "Error through RPC layer" on Ubuntu. Adding "option rpc-auth-allow-insecure on" to /etc/glusterfs/glusterd.vol solved it for me. Don't forget your firewall. Good night and thanks to semiosis...
19:41 svenwiesner exit
19:41 svenwiesner left #gluster
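For reference, the fix svenwiesner describes is one extra option line in /etc/glusterfs/glusterd.vol followed by a glusterd restart. The surrounding lines below are typical 3.4 defaults and may differ slightly between distros:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket,rdma
        option rpc-auth-allow-insecure on
    end-volume

    # then restart the management daemon, e.g.
    service glusterfs-server restart    # Debian/Ubuntu packaging
    service glusterd restart            # RHEL/CentOS packaging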
19:49 nikk cp0k: have a chance to look at that?
19:52 cp0k nikk: looking now
19:52 edward2 joined #gluster
19:55 spiekey_ if i have "Number of entries: 0" then i have no files on my gluster volume, correct?
19:58 cp0k nikk: everything appears to check out just fine, I would check the little silly things like ensuring that /data/gv0/brick1 dir exists on the new storage nodes
19:58 cp0k nikk: that your /etc/glusterfs/glusterd.vol file matches, etc..
20:00 nikk the dir exists, let me check that file
20:01 cp0k nikk: k, make sure you are not firewalling the gluster ports as well, again all little silly things :)
20:01 spiekey_ am i muted? :P
20:01 nikk all there is is "volume management" and it's identical on all four
20:02 cp0k nikk: okay, sounds like everything checks out...in your place I would try turning it off and back on again heh
20:03 cp0k nikk: if the boxes are not in production that is
20:04 cp0k nikk: be back in a bit, need to get some food in me, will check back on ya when I get back
20:04 nikk mmk
20:04 StarBeast joined #gluster
20:05 spiekey_ here is more info about my non-replication gluster http://fpaste.org/76681/92235496/ :-)
20:05 glusterbot Title: #76681 Fedora Project Pastebin (at fpaste.org)
20:05 spiekey_ i offer a candy!
20:06 rpowell1 joined #gluster
20:12 semiosis @later tell svenwiesner that is not normal for glusterfs, on Ubuntu or any other distro.
20:12 glusterbot semiosis: The operation succeeded.
20:12 semiosis @later tell svenwiesner re: <svenwiesner> Regarding issue "Error through RPC layer" on Ubuntu. Adding "option rpc-auth-allow-insecure on" to /etc/glusterfs/glusterd.vol solved it for me.
20:12 glusterbot semiosis: The operation succeeded.
20:13 spiekey_ glusterbot: can you help me? ;)
20:15 Elico joined #gluster
20:17 ulimit semiosis: interesting, just wanted to ask about that. I thought that server.allow-insecure on would have been sufficient.
20:21 recidive joined #gluster
20:21 semiosis ulimit: allow-insecure is not normally needed to use glusterfs (regardless of distro)
20:22 semiosis ulimit: however, in those (rare) cases where it is needed, you need to do two things... allow it on the volumes (using set command) and allow it in the glusterds, by editing its volfile
20:23 semiosis but again, this is for special things most people will never need
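Put concretely, the two steps look roughly like this for a hypothetical volume named vol1 (the second option is the same line svenwiesner mentioned earlier, placed in /etc/glusterfs/glusterd.vol on every server, followed by a glusterd restart):

    # 1) per volume, via the set command
    gluster volume set vol1 server.allow-insecure on

    # 2) in /etc/glusterfs/glusterd.vol on every server, then restart glusterd
    option rpc-auth-allow-insecure on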
20:23 glusterbot spiekey_ can anyone? ;)
20:24 spiekey_ ;)
20:25 bennyturns joined #gluster
20:25 bennyturns ork
20:25 semiosis spiekey_: are you writing data directly into the bricks?  you should not do that -- all access should go through a client mount point
20:25 spiekey_ oh
20:25 semiosis spiekey_: please pastie your client log file
20:26 spiekey_ yes, i wrote directly to my brick directory
20:26 semiosis well then there's your problem
20:26 semiosis dont do that
20:26 _dist :)
20:26 spiekey_ that was easy!
20:26 semiosis ,,(next)
20:26 glusterbot Another satisfied customer... NEXT!
20:27 semiosis purpleidea: ^^
20:27 purpleidea semiosis: haha yeah!
20:28 sputnik13net joined #gluster
20:29 partner hello
20:29 glusterbot partner: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
20:29 partner sorry bot, just came to say hello and state i'm still alive :)
20:29 JoeJulian Howdy pardner
20:30 partner no worries, i didn't come with any issues this time. ohwell there's one but just waiting for chance to upgrade to 3.4 to get new challenges :)
20:30 JoeJulian heh
20:32 partner we're upgrading the storage clients to wheezy and renewing that infra, once done i'll do the upgrade from 3.3.2
20:32 JoeJulian I wish I had the hardware to duplicate my servers. Seems like a strong test environment would be to capture all my fuse transactions and play them back over the version I want to test.
20:32 KyleG joined #gluster
20:32 KyleG joined #gluster
20:33 tryggvil joined #gluster
20:33 partner i wish i could ship our hw for you to help with your efforts but that would cost quite a bit :o
20:33 JoeJulian :)
20:34 partner nothing new of course, sorry, but i have rack full of servers for which i find no good use
20:34 partner i guess i'll just "put them into cloud"
20:35 JoeJulian where are these servers?
20:35 * JoeJulian has more uses than hardware...
20:35 KyleG A full rack with no good use? Where are you, IT Heaven?! lol
20:35 nikk virtualization is your friend
20:35 nikk not a good friend, but a friend none the less
20:36 dbruhn_ I have plenty of hardware, but had to pull my stuff out for other stuff, so now I am missing power
20:36 nikk that's what she said
20:36 dbruhn_ lol
20:36 dbruhn_ I just wish I had time to work on servers, I've spent too much time hating business life.
20:37 JoeJulian Hrm... It looks like partner's extra hardware is in https://www.youtube.com/watch?v=cmBlUb2dcsk
20:37 glusterbot Title: Spamalot - Finland (FULL) - YouTube (at www.youtube.com)
20:37 partner hoho
20:37 partner what the heck..
20:37 JoeJulian You'd not seen that before?
20:38 partner it seems i haven't :)
20:38 JoeJulian Spamalot's playing here in Seattle right now, but I saw it in Las Vegas.
20:39 partner i faintly recall that "i said england, not finland"
20:39 JoeJulian Pretty accurate depiction of Finland though? ;)
20:40 partner sure, now the rest of the channel knows as well how we are :D
20:40 JoeJulian lol
20:43 partner on the topic, are there any bugs around currently for 3.4 series stable that would affect any "normal" operations, rebalancing or adding bricks or anything such?
20:43 partner topic as of #channel..
20:44 partner i managed to bump two rebalance bugs with the 3.3-series so trying to be a bit more cautious here..
20:44 JoeJulian 3.4.2 has every critical (in my own opinion) bug that I'm aware of quashed.
20:45 partner 3.3.2 has nasty memory leak which prevents rebalancing for more than few days with 8 gigs on the server, luckily the limits bite and new stuff gets stored to only bricks having enough space
20:45 partner by limit i mean the min-free-space i've set for the volume
20:47 partner my testing environment is way too small nor active enough to spot such issues
20:47 ulimit semiosis: thanks. we are working on Bareos (backup software) and trying to use libgfapi; the daemon in charge of storing backups does not run as root, that's why. I think that disallowing unprivileged source ports does not really make it much more secure
20:47 semiosis ah yes, you have one of those unusual uses that needs allow-insecure :)
20:48 semiosis and you're right, "secure ports" have no bearing on actual security
20:53 _dist semiosis: I've noticed while using proxmox that it doesn't set the allow-insecure but libgfapi still works, never put it together myself where that was the case
20:53 semiosis _dist: if you run an app that uses libgfapi as root, then it doesnt need allow-insecure
20:54 semiosis idk how proxmox runs things
20:54 _dist semiosis: got it, that makes perfect sense. Proxmox runs KVM as root
20:55 andreask joined #gluster
21:20 REdOG joined #gluster
21:21 Elico joined #gluster
21:21 REdOG anyone know how to convert a gluster fuse image to libgfapi image?
21:22 REdOG can I just dd it?
21:22 _dist REdOG: in my experience you can just run it on fuse or libgfapi
21:23 andreask it is always a file on a glusterfs volume
21:23 REdOG hmm ok
21:23 andreask I'd also simply try to use it
21:23 REdOG yea I thought thats what I did
21:23 REdOG my qemu config must be borked
21:23 REdOG tks
21:24 _dist REdOG: when I test it, I just use the cli to make sure things are working
21:28 cp0k nikk: I am back, how are you doing with adding the new bricks?
21:28 sputnik13net if my storage nodes have 30TB each, and I want to store a file that is larger than 30TB, do I need to use striping?
21:28 KyleG a file larger than 30TB? whoa.
21:29 REdOG does libvirt work with libgfapi?
21:29 tryggvil joined #gluster
21:29 * REdOG is thinking of trying that
21:29 KyleG I talked to the guys who do storage for south park studios, and an entire episode of theirs takes 1 TB raw, I can't imagine what a 30TB file would contain….lol.
21:29 Matthaeus joined #gluster
21:29 _dist REdOG: sort of, yes, if you edit the XML; libvirt 1.2.x introduced support for libgfapi-backed storage, and that works, but there is currently no libvirt GUI for libgfapi (other than say ovirt)
21:29 sputnik13net KyleG: block volumes for VMs that store hundreds of TB :)
21:30 KyleG ah kewl
21:30 REdOG _dist: tks
21:30 _dist REdOG: honestly I just finished switching to proxmox 3.1 for that very reason, but that's a big switch (it doesn't use libvirt)
21:30 sputnik13net so...  anyone?
21:30 sputnik13net from what I can understand so far, striping seems the only way to get files stored larger than a brick
21:31 _dist over fuse? single 30TB file?
21:31 REdOG _dist: do you know where I can find an example config xml?
21:31 nikk cp0k: sorry been distracted with something else, i had no luck though, firewall and selinux off, tried rebooting, there's no error other than the one i pasted.. just returned error code -1
21:31 nikk i deleted and started a new volume, which worked, i'm going to remove two bricks and see if i can re-add them
21:32 _dist REdOG: no not for certain, something like this: https://www.redhat.com/archives/libvirt-users/2013-July/msg00056.html (I remember it took me a while to figure it out last time I was doing it, but it's not hard once you get it)
21:32 glusterbot Title: Re: [libvirt-users] Libvirt and Glusterfs (at www.redhat.com)
21:32 cp0k nikk: sounds good...by deleting the volume and recreating it, was Gluster able to still read the data?
21:32 cp0k nikk: or this is data that you dont care about?
21:33 _dist REdOG: this too http://www.gluster.org/community/documentation/images/9/9d/QEMU_GlusterFS.pdf also that author has a video you can follow where he does it step by step
21:34 JoeJulian sputnik13net: That is correct. That is the purpose of ,,(stripe)
21:34 glusterbot sputnik13net: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
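For reference, a striped volume is created by giving a stripe count; hostnames and brick paths below are placeholders, and the post linked above explains why stripe is rarely worth it unless single files genuinely exceed a brick:

    # spread each file in fixed-size chunks across 4 bricks
    gluster volume create bigvol stripe 4 \
        srv1:/bricks/b1 srv2:/bricks/b1 srv3:/bricks/b1 srv4:/bricks/b1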
21:34 REdOG _dist: tks
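The libvirt side REdOG is after looks roughly like the <disk> element below (libvirt 1.2.x or newer); the volume name, image path, and host are placeholders:

    <disk type='network' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source protocol='gluster' name='vmvol/images/vm1.qcow2'>
        <host name='gluster1.example.com' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>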
21:35 cp0k anyone here using commodity hardware for their gluster setup? or mostly proprietary metal like Dell, HP, etc...
21:35 Matthaeus supermicro white boxes.
21:35 cp0k I hated supermicro, till Gluster came along :)
21:35 Matthaeus They had the benefit of being free.
21:35 _dist supermicro also
21:36 _dist I'm also uses a home style pc setup for testing on asrock
21:36 _dist using* sorry this chat only gets 20% attention at best :)
21:37 plarsen joined #gluster
21:39 cp0k Matthaeus: how are you getting your supermicros for free? ;)
21:39 ctria joined #gluster
21:40 JoeJulian Asus
21:42 Matthaeus cp0k: I live in the Land of Defunct Startups.
21:43 REdOG I get an error in glusterd-vol.log 'Request received from non-privileged port. Failing request'
21:43 sputnik13net when setting up replication with gluster, is there a way to specify a group of bricks as one replica set and another group as the other replica set?
21:43 REdOG libvirt tells me Transport endpoint is not connected
21:43 cp0k Matthaeus: good for you!
21:43 cp0k :)
21:43 _dist REdOG you need allow-insecure turned on
21:43 sputnik13net we have storage servers split up between multiple racks, I'd ideally like each set to be on different racks
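On the rack question: in a replica volume, bricks are grouped into replica sets in the order they are listed, each consecutive group of <replica-count> bricks forming one set. Interleaving racks therefore keeps each copy pair on separate racks; a sketch with hypothetical hostnames:

    # bricks 1+2 form one replica pair, bricks 3+4 the next, and so on
    gluster volume create vol0 replica 2 \
        rack1-srv1:/bricks/b1 rack2-srv1:/bricks/b1 \
        rack1-srv2:/bricks/b1 rack2-srv2:/bricks/b1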
21:43 REdOG _dist: I thought I did
21:43 REdOG server.allow-insecure: on
21:43 REdOG in info
21:44 _dist REdOG: iirc restart of volume is required
21:44 REdOG ah
21:44 JoeJulian I wonder if that bug has been filed.
21:44 _dist REdOG: also I suggest https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Quick_Start_Guide/chap-Quick_Start_Guide-Virtual_Preparation.html
21:44 glusterbot Title: Chapter 3. Managing Virtual Machine Images on Red Hat Storage Servers (at access.redhat.com)
21:44 _dist JoeJulian: I'm not certain on the restart
21:45 REdOG yea same error
21:45 cp0k Is a stop / restart of the volume required when adding new bricks? I forget
21:45 _dist REdOG: no it's not a simple volume option, it's in the glusterd.vol
21:45 REdOG hmm
21:46 _dist http://www.ovirt.org/Features/GlusterFS_Storage_Domain
21:46 glusterbot Title: Features/GlusterFS Storage Domain (at www.ovirt.org)
21:46 _dist "option rpc-auth-allow-insecure on ==> in glusterd.vol (ensure u restart glusterd service... for this to take effect) "
21:46 _dist I suggest you read the whole important pre-reqs part
21:47 REdOG in progress
21:48 nikk cp0k: ok so i can reproduce it.. one sec
21:50 nikk cp0k: http://ur1.ca/gm914
21:50 glusterbot Title: #76716 Fedora Project Pastebin (at ur1.ca)
21:50 ktosiek joined #gluster
21:50 nikk sad panda
21:52 REdOG _dist: bingo :)
21:52 JoeJulian nikk: That seems slightly expected, though I would have thought you'd at least get the error.
21:53 nikk yeah..
21:53 JoeJulian ~path or prefix | nikk
21:53 glusterbot nikk: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
21:53 JoeJulian nikk: what version is that?
21:54 nikk glusterfs-server.x86_64           3.4.2-1.el7                    @glusterfs-epel
21:54 JoeJulian nikk: also ,,(hostnames)
21:54 glusterbot nikk: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
21:54 JoeJulian You missed the last step there.
21:55 nikk yeah i noticed the ip in there
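With the rhel3/rhel4 names from above and a hypothetical first node rhel1, the full sequence the factoid describes is:

    # from rhel1, probe every other peer by name
    gluster peer probe rhel3
    gluster peer probe rhel4
    # then, from any one of the others, probe the first node by name
    # so it too is recorded by hostname instead of IP
    gluster peer probe rhel1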
21:55 cp0k nikk: what version of Gluster are you running here?
21:55 JoeJulian cp0k: ^^^
21:55 cp0k just noticed, sorry about that
21:55 JoeJulian hehe
21:56 cp0k well now I am a bit concerned, I just upgraded to 3.4.2 and am also going to be adding new servers / bricks in
21:56 nikk JoeJulian: ok that fixed the hostname problem
21:56 cp0k I really hope I don't run into the same issue
21:57 nikk i'd try to reproduce your setup in VMs first
21:58 JoeJulian The problem is that he removed bricks and re-added the same ones. The pre-existing bricks have extended attributes that mark them as having been part of a volume. GlusterFS tries to protect you from adding bricks that were part of a volume already to an existing volume. Adding pre-populated bricks to an existing volume can be fraught with potential problems.
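The markers JoeJulian refers to are extended attributes on the brick root; if a wiped brick path really is meant to be reused, they can be cleared like this, using the brick path from earlier in the conversation (only on a brick whose data you intend to throw away):

    setfattr -x trusted.glusterfs.volume-id /data/gv0/brick1
    setfattr -x trusted.gfid /data/gv0/brick1
    rm -rf /data/gv0/brick1/.glusterfs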
21:58 nikk well
21:58 nikk first i tried just re-adding
21:58 JoeJulian The real bug, though, is that it didn't print the actual error message.
21:58 nikk didn't work, i removed /gluster/gv0 on the file system of the two nodes that were removed
21:58 cp0k agreed
21:58 nikk still didn't work
21:59 JoeJulian /var/log/glusterfs/etc-glusterfs-glusterd.vol.log please
21:59 cp0k yes, nikk was trying to add these new bricks to the existing volume, they were not pre-populated bricks
21:59 nikk JoeJulian: on which host, the one removing or the one being removed?
22:00 JoeJulian Let's start with the one on which you issued that command.
22:00 nikk k, one doing the removing - http://ur1.ca/gm92t
22:00 glusterbot Title: #76725 Fedora Project Pastebin (at ur1.ca)
22:01 nikk granted this includes a bunch of earlier crap
22:01 haomaiwang joined #gluster
22:01 nikk so only look at the 16:47 stuff
22:01 nikk now here is one of the nodes being removed - http://ur1.ca/gm92w
22:01 glusterbot Title: #76726 Fedora Project Pastebin (at ur1.ca)
22:01 JoeJulian line 672
22:02 nikk yeah
22:02 nikk i used force
22:02 nikk this is a test machine, not production
22:02 JoeJulian not according to your paste.
22:03 nikk i did it without force, it complained, then i did it with force
22:06 social cp0k: if you have non-root permissions set on the root of your gluster volume, remember that the add-brick command will break them and you'll have to reset them
22:07 nikk drwxr-xr-x    2 root root    6 Feb 12 11:56 gluster
22:08 social to be more exact https://bugzilla.redhat.com/show_bug.cgi?id=1063832 << this
22:08 glusterbot Bug 1063832: high, unspecified, ---, nsathyan, NEW , add-brick command seriously breaks permissions on volume root
22:08 nikk it's been that the whole time, was one of the early things i checked :)
22:09 nikk i don't know if that's the same issue
22:12 nikk i need to get up, i'll be back later tonight or tomorrow, appreciate the help, let me know if you have any other ideas
22:12 nikk otherwise i'll bother you guys again tomorrow :]
22:12 failshell joined #gluster
22:12 nikk another thing to note, actually, this is on rhel7 beta but i can't imagine that matters
22:13 bennyturns joined #gluster
22:13 zerick joined #gluster
22:14 cp0k social: I am not 100% following
22:14 cp0k http://https//bugzilla.redhat.com/show_bug.cgi?id=1063832 does not load for me
22:14 glusterbot Bug 1063832: high, unspecified, ---, nsathyan, NEW , add-brick command seriously breaks permissions on volume root
22:15 plarsen joined #gluster
22:16 social meheh broken links here, just look up bug 1063832 in redhat bugzilla if you are adding bricks
22:16 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1063832 high, unspecified, ---, nsathyan, NEW , add-brick command seriously breaks permissions on volume root
22:17 cp0k social: got it, thanks...seems this bug was just reported yesterday
22:18 cp0k my gluster root directory is currently set to root:root so I guess I am fine :)
22:41 a2 joined #gluster
22:44 psyl0n joined #gluster
23:01 ira joined #gluster
23:03 tryggvil joined #gluster
23:15 gdubreui joined #gluster
23:16 eshy joined #gluster
23:25 wralej i'm planning to try btrfs for bricks.. to do so, do i need to use 3.5's latest beta, or is there a stable version with support for this?
23:26 JoeJulian Any should work. I think 3.5 is supposed to give you snapshots though...
23:26 wralej JoeJulian: excellent.. thank you
23:27 Matthaeus Heh.  With a little ingenuity and some insanity you can roll your own snapshots with gluster and lvm.
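A rough sketch of the gluster-plus-LVM snapshot approach Matthaeus alludes to, assuming each brick sits on its own logical volume (names are hypothetical):

    # take a point-in-time snapshot of the brick's logical volume
    lvcreate --snapshot --name brick1_snap --size 10G /dev/vg_bricks/brick1
    # mount it read-only for backup or inspection, then remove it when done
    mount -o ro /dev/vg_bricks/brick1_snap /mnt/brick1_snap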
23:28 wralej I'm hoping to benefit from copy-on-write and in-line compression.
23:28 wralej Though, I'm guessing that compression won't work very well for VM volumes...
23:29 Matthaeus Are they sparse images?
23:29 * JoeJulian always hearkens back to 20Mb hard drives when people talk about filesystem compression...
23:30 wralej Matthaeus: yes
23:30 divbell my first hard drive was a hand-me-down 10MB MFM with 6MB of bad sectors
23:30 JoeJulian :)
23:30 wralej nice :)
23:31 wralej I think my first hard drive was like 800MB.  we were late to the game, because floppies worked so well on the amiga 500.. or so we thought at the time.. :)
23:31 wralej lha and lzh.. :)
23:32 wralej thanks for the help.. gotta run.. afk
23:42 _dist left #gluster
23:49 jporterfield joined #gluster
23:59 smellis _dist: how'd your deployment go?
