
IRC log for #gluster, 2013-08-23


All times shown according to UTC.

Time Nick Message
00:00 a2_ nfs has been changed to use 2049 since 3.4
00:03 JoeJulian @factoids change ports 1 "s/.$/and 2049 since 3.4./"
00:03 glusterbot JoeJulian: The operation succeeded.
00:03 JoeJulian @ports
00:03 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111and 2049 since 3.4.
00:03 JoeJulian @factoids change ports 1 "s/111and/111 and/"
00:03 glusterbot JoeJulian: The operation succeeded.
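For reference, the factoid above maps directly onto firewall rules. A minimal iptables sketch for a 3.4 brick server, assuming TCP transport and a handful of bricks (the brick port range and rule placement are assumptions):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (24008 only with rdma)
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT   # bricks: one port per brick, counting up from 49152 on 3.4
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS, plus NLM since 3.3.0
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS since 3.4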
00:04 MugginsM so the only way to make 3.4 behave like 3.3 in terms of ports is a recompile?
00:04 JoeJulian Unless a2_ says differently
00:06 a2_ can't think of another way
00:06 sprachgenerator joined #gluster
00:09 sprachgenerator joined #gluster
00:10 MugginsM crap.
00:10 * MugginsM tries to figure out how to roll back
00:11 MugginsM worked fine in test, so I rolled it out to staging, which has a firewall :-/
00:11 a2_ :|
00:11 MugginsM on the plus side, we had a staging
00:12 JoeJulian rolling back (if you're using rpms) should be as easy as pointing to the 3.3 repo and "yum downgrade gluster*"
00:13 MugginsM yeah, it's ubuntu
00:13 MugginsM will figure it out
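A sketch of the rollback JoeJulian describes, for both package managers; the 3.3 repository must already be configured, and the Ubuntu version strings here are assumptions:

    # EL/Fedora, after pointing yum at the 3.3 repo
    yum downgrade 'gluster*'

    # Ubuntu: install the 3.3 packages explicitly (and pin them afterwards to avoid re-upgrading)
    apt-get install glusterfs-server=3.3.2-1 glusterfs-client=3.3.2-1 glusterfs-common=3.3.2-1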
00:13 MugginsM just verifying it's the problem
00:13 MugginsM yeah, it is, machines on one side of the firewall are fine
00:13 JoeJulian MugginsM: File a bug report. This could be considered a compatibility issue and might make it into 3.4.1 or 2
00:13 * MugginsM nods. even a warning in the upgrade guide would help :)
00:14 JoeJulian I noticed the release notes weren't linked to the wiki. I've fixed that.
00:15 JoeJulian ... because everyone reads all the release notes before upgrading, right?
00:15 MugginsM I did, but I didn't know we had an extra firewall :)
00:15 MugginsM I wasn't involved with the original deploy
00:16 JoeJulian a2_: Still have to document it, but it looks like a gfid mismatch in 3.4.0 causes a client (and possibly even a brick) hang.
00:18 a2_ so there's a bug which is *causing* a gfid mismatch, and there's a bug which makes the client hang when there's a gfid mismatch ? :|
00:18 JoeJulian I'm pretty sure the mismatch was split-brain related.
00:18 a2_ what splitty thing did you do?
00:19 JoeJulian I'll get a repro written up...
00:19 JoeJulian I had a server hang when it rebooted... and I did a little bit of punting.
00:19 JoeJulian So the split-brain I take full credit for. :D
00:20 JoeJulian The hang, however, was unexpected since I thought we had that handled in 3.0.
00:42 MugginsM ok putting the port back to 24009 is working
00:42 MugginsM phew
00:44 JoeJulian MugginsM: you could always set up a gre tunnel through that firewall. :D
00:45 MugginsM heh
00:58 yinyin joined #gluster
01:11 SteveWatt joined #gluster
01:18 kevein joined #gluster
01:32 sprachgenerator joined #gluster
01:35 robo joined #gluster
01:38 jporterfield joined #gluster
01:43 jporterfield joined #gluster
01:52 SteveWatt joined #gluster
02:15 glusterbot New news from newglusterbugs: [Bug 986775] file snapshotting support <http://goo.gl/ozgmO>
02:21 bala joined #gluster
02:27 jporterfield joined #gluster
02:31 asias joined #gluster
02:41 bharata joined #gluster
03:09 jporterfield joined #gluster
03:19 jporterfield joined #gluster
03:29 shubhendu joined #gluster
03:41 ppai joined #gluster
03:43 itisravi joined #gluster
03:44 awheeler joined #gluster
03:45 rastar joined #gluster
03:56 jporterfield joined #gluster
03:57 f0reAlz joined #gluster
03:58 RameshN joined #gluster
04:02 f0reAlz any
04:03 f0reAlz anyone seen problems with "no space on device" when there is plenty of space available?
04:03 dusmant joined #gluster
04:03 f0reAlz I have two distributed bricks and they are 1.5 TB each
04:04 mkasa joined #gluster
04:04 f0reAlz and have about 260 G free, and even though I am deleting files off of them it seems like I am just creeping back toward using up what's available
04:05 f0reAlz Nothing is standing out in the logs, and of course I checked df on the whole fs to make sure there is sufficient space available
04:05 f0reAlz Any thoughts, anyone? Would appreciate feedback
04:05 f0reAlz TIA
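Since a brick can return ENOSPC while free blocks still show, checking inode usage on each brick alongside block usage is a quick first step; a sketch, with the brick path as an assumption:

    df -h  /export/brick1    # block usage on the brick filesystem
    df -hi /export/brick1    # inode usage; an exhausted inode table also reports "No space left on device"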
04:05 johnsonetti joined #gluster
04:06 sgowda joined #gluster
04:08 SteveWatt joined #gluster
04:09 mkasa Hi.
04:09 glusterbot mkasa: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
04:12 mkasa My volume is distributed & replicated (#replica: 2), and has split-brain errors.
04:13 f0reAlz hi
04:13 glusterbot f0reAlz: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
04:13 mkasa I read http://www.joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ and fixed all the split-brain files, but 'gluster volume heal volumename info split-brain' still complains that some directories are in an error state.
04:13 glusterbot <http://goo.gl/FzjC6> (at www.joejulian.name)
04:13 nightwalk joined #gluster
04:14 SteveWatt left #gluster
04:14 mkasa I am pretty sure that all files are consistent (the sizes are all the same, binary diff reported no descrepancies) across the nodes.
04:15 mkasa So, what should I do to clear the errors?
04:15 shylesh joined #gluster
04:20 mkasa atime/ctime/mtime may be different between the nodes, but does it matter?
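The fix described in that blog post boils down to deleting the copy you decide is bad, together with its .glusterfs hard link, on that one brick, and then letting self-heal restore it. A sketch only, assuming replica 2; the volume name, brick path, file path and gfid are all illustrative:

    BAD_BRICK=/export/brick1
    F=some/dir/file.txt
    getfattr -m . -d -e hex "$BAD_BRICK/$F"            # note the trusted.gfid value
    rm "$BAD_BRICK/$F"
    # rm "$BAD_BRICK/.glusterfs/aa/bb/aabbccdd-...."   # the hard link named after that gfid
    gluster volume heal VOLNAME full                   # self-heal recreates the file from the good replica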
04:21 aravindavk joined #gluster
04:22 hagarth joined #gluster
04:23 sahina joined #gluster
04:26 harish_ joined #gluster
04:27 kanagaraj joined #gluster
04:36 raghu joined #gluster
04:37 itisravi_ joined #gluster
04:37 ndarshan joined #gluster
04:38 itisravi_ joined #gluster
04:42 Humble joined #gluster
04:45 spandit joined #gluster
04:48 DV joined #gluster
04:52 lalatenduM joined #gluster
04:53 mohankumar joined #gluster
04:53 lalatenduM joined #gluster
04:58 vpshastry joined #gluster
05:05 deepakcs joined #gluster
05:05 rjoseph joined #gluster
05:07 anands joined #gluster
05:08 psharma joined #gluster
05:23 shruti joined #gluster
05:23 bulde joined #gluster
05:24 Guest53741 joined #gluster
05:25 sgowda joined #gluster
05:27 chirino joined #gluster
05:28 rastar joined #gluster
05:35 ndarshan joined #gluster
05:39 chirino joined #gluster
05:41 jporterfield joined #gluster
05:44 ajha joined #gluster
05:49 chirino joined #gluster
05:52 raghug joined #gluster
05:53 nshaikh joined #gluster
05:59 chirino joined #gluster
06:00 sgowda joined #gluster
06:08 chirino joined #gluster
06:08 rgustafs joined #gluster
06:08 aravindavk joined #gluster
06:13 lalatenduM joined #gluster
06:16 chirino joined #gluster
06:22 jtux joined #gluster
06:24 chirino joined #gluster
06:29 anands joined #gluster
06:30 JoeJulian mkasa: Did you notice the gluster volume heal $vol info split-brain entries all have timestamps? It's more like a log and doesn't necessarily represent current split-brain status.
06:34 vshankar joined #gluster
06:37 chirino joined #gluster
06:37 rastar_ joined #gluster
06:41 vimal joined #gluster
06:42 mooperd joined #gluster
06:42 psharma joined #gluster
06:46 romero joined #gluster
06:50 chirino joined #gluster
06:58 chirino joined #gluster
06:59 mkasa Thanks for the reply. I know that 'gluster volume heal $vol info split-brain' shows a log, not the current status. The split-brain directories I mentioned are periodically (every 10 min.) added to the log, so I thought it was not resolved.
07:00 eseyman joined #gluster
07:01 mkasa The version I use is 3.4.0-8 (on CentOS 6.4, 64-bit). The volume is still being rebalanced (in progress), so that might be related too.
07:08 hybrid5121 joined #gluster
07:09 mkasa It seems I found a workaround: create a new temporary directory, move all files from the problematic directory into it, delete the (now empty) problematic directory, then rename the temporary directory into place.
07:10 ricky-ticky joined #gluster
07:10 ngoswami joined #gluster
07:10 chirino joined #gluster
07:10 mkasa Now I do not see split-brain directories reported periodically in the log (what 'gluster volume heal $vol info split-brain' shows).
07:13 mkasa I kept some (problematic) directories for debugging, so I can still check something if necessary.
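The workaround mkasa describes, spelled out as a sketch run on a client mount (paths are assumptions, and dotfiles would need to be moved separately):

    cd /mnt/myvol/parent
    mkdir baddir.new
    mv baddir/* baddir.new/      # move the contents out of the flagged directory
    rmdir baddir                 # remove the now-empty directory that kept showing up as split-brain
    mv baddir.new baddir         # rename the replacement into place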
07:19 vpshastry1 joined #gluster
07:22 chirino joined #gluster
07:24 eseyman joined #gluster
07:24 andreask joined #gluster
07:29 mooperd joined #gluster
07:37 chirino joined #gluster
07:46 aravindavk joined #gluster
07:48 chirino joined #gluster
07:58 chirino joined #gluster
08:07 mooperd joined #gluster
08:10 satheesh joined #gluster
08:17 mgebbe_ joined #gluster
08:26 chirino joined #gluster
08:35 spider_fingers joined #gluster
08:36 chirino joined #gluster
08:40 niximor joined #gluster
08:40 niximor hi there
08:40 niximor i tried googling, but could not figure out how to properly set up replication
08:41 niximor what I need is: one volume, with one storage, replicated over multiple servers for redundancy
08:41 niximor and I need to be able to dynamically add new servers in case one server fails
08:41 niximor can this be achieved using glusterfs?
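What niximor describes is a plain replicated volume, with replace-brick used to swap in a new server after a failure. A sketch, with hostnames and brick paths as assumptions:

    # one volume, every file replicated on both servers
    gluster peer probe server2
    gluster volume create myvol replica 2 server1:/export/brick server2:/export/brick
    gluster volume start myvol

    # if server2 dies, point its brick at a freshly probed replacement and let self-heal repopulate it
    gluster peer probe server3
    gluster volume replace-brick myvol server2:/export/brick server3:/export/brick commit force
    gluster volume heal myvol full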
08:48 wica_ joined #gluster
08:50 wica_ Hi, Hi
08:51 chirino joined #gluster
08:52 wica_ joined #gluster
08:52 |miska| left #gluster
08:55 ujjain joined #gluster
08:55 ujjain joined #gluster
08:56 wica_ Hi, we have glusterfs 3.3.2 on Ubuntu. Now we have a disk/brick failure, so we have to replace it. Are there work instructions somewhere on how to do it?
08:56 wica_ Or can I turn off the server and replace the disk?
08:57 social wica_: replicate setup?
08:58 wica_ social: Yep
08:58 raghug joined #gluster
08:59 chirino joined #gluster
09:00 social wica_: does the disk hold data or is it just part of a raid? If you are replicated it should be quite safe to take down one node
09:01 wica_ social: No, the disk is not part of a raid. disk = brick.  It is safe to bring down a node. I did it once.
09:02 wica_ So I can bring down the server (or use hotswap), replace the disk, turn it back on and rebalance it all.
09:02 wica_ Is this correct?
09:03 social should be, we usually just copy the configuration and power it up, but that's on Amazon after losing a whole node :)
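One sequence commonly used for a dead replica brick where disk = brick, written up as a sketch only (the volume name and brick path are assumptions, and it is worth rehearsing on a test volume first):

    # 1. stop gluster on the node, swap the disk, mkfs, and mount it at the old brick path
    # 2. the new, empty brick is missing the volume-id xattr; copy it from the surviving replica
    getfattr -n trusted.glusterfs.volume-id -e hex /export/brick1                     # run on the good node
    setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-above> /export/brick1    # run on the new disk
    # 3. restart glusterd and let self-heal refill the brick
    service glusterd restart
    gluster volume heal myvol full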
09:04 wica_ :) this is my server :) Thanks, going to do it tonight.
09:04 wica_ Ps. I have re
09:04 wica_ nevermind
09:13 chirino joined #gluster
09:24 chirino joined #gluster
09:25 Norky joined #gluster
09:34 chirino joined #gluster
09:37 rastar joined #gluster
09:37 jporterfield joined #gluster
09:44 mooperd joined #gluster
09:46 chirino joined #gluster
09:47 jporterfield joined #gluster
09:48 jebba joined #gluster
09:57 chirino joined #gluster
10:07 mooperd joined #gluster
10:08 chirino joined #gluster
10:16 aravindavk joined #gluster
10:27 psharma joined #gluster
10:27 chirino joined #gluster
10:35 jporterfield joined #gluster
10:36 chirino joined #gluster
10:37 vpshastry1 joined #gluster
10:40 jebba joined #gluster
10:45 hagarth hmm, gluster.org seems to be down
10:47 chirino joined #gluster
10:50 hagarth @channelstats
10:50 glusterbot hagarth: On #gluster there have been 173795 messages, containing 7354450 characters, 1228556 words, 4915 smileys, and 656 frowns; 1076 of those messages were ACTIONs. There have been 66569 joins, 2068 parts, 64489 quits, 21 kicks, 165 mode changes, and 7 topic changes. There are currently 220 users and the channel has peaked at 226 users.
10:53 duerF joined #gluster
11:00 chirino joined #gluster
11:01 NuxRo hagarth: got hackernews-ed
11:01 NuxRo https://news.ycombinator.com/
11:01 glusterbot Title: Hacker News (at news.ycombinator.com)
11:01 NuxRo i mean, https://news.ycombinator.com/item?id=6262347
11:02 glusterbot Title: How far the once mighty SourceForge has fallen… | Hacker News (at news.ycombinator.com)
11:03 NuxRo and how also gluster.org has fallen :)
11:04 NuxRo anyone running wordpress or drupal these days should put varnish or squid in front of it
11:08 NuxRo it's all jclift's fault
11:09 chirino joined #gluster
11:17 chirino joined #gluster
11:18 andreask joined #gluster
11:19 ppai joined #gluster
11:19 davinder joined #gluster
11:20 davinder i have deployed the gluster setup with replicated volume
11:21 davinder now I have one question ... if I mount the first gluster server on the client, then if that server goes down, how will the client auto-mount from the other gluster server?
11:21 lpabon joined #gluster
11:21 davinder do i have to use a tool like heartbeat or LVS for that?
11:24 purpleidea glusterbot: gluster.org is down it seems
11:24 purpleidea http://www.gluster.org/
11:25 purpleidea avati: not sure who should be notified. maybe you know?
11:25 purpleidea message on page is: Error establishing a database connection
11:25 davinder intermittent issue
11:25 mambru joined #gluster
11:26 davinder how will the client automatically switch to the other replica server?
11:27 Peanut Did gluster.org DDOS itself by posting something on Reddit about SourceForge?
11:30 andreask davinder: native gluster client knows all servers and if it detects one server is down it read/writes only from/to the remaining replicas
11:31 NuxRo Peanut: they're at least on hacker news :)
11:31 NuxRo davinder: it's done automatically by the gluster client
11:31 NuxRo you don't have to do anything
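The failover itself needs no configuration; the only step tied to a single hostname is fetching the volume layout at mount time, and the 3.3/3.4-era mount script accepts a fallback for that too. A sketch, with hostnames and the volume name assumed:

    mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol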
11:37 chirino joined #gluster
11:37 Humble joined #gluster
11:45 ppai joined #gluster
11:48 chirino joined #gluster
11:48 rgustafs joined #gluster
11:52 dusmant joined #gluster
11:56 chirino joined #gluster
12:04 chirino joined #gluster
12:15 chirino joined #gluster
12:19 johnmark Peanut: oh that explains it. rebooting the server
12:22 jtux joined #gluster
12:30 rcheleguini joined #gluster
12:31 chirino joined #gluster
12:31 LoudNoises joined #gluster
12:40 awheeler joined #gluster
12:41 awheeler joined #gluster
12:41 Slashman joined #gluster
12:43 chirino joined #gluster
12:43 Slashman hello, does anyone have an idea about when the official website http://gluster.org will be back ?
12:44 johnmark Slashman: yes, as soon as we stop getting DDOS'd by hacker news :)
12:44 johnmark I'm trying to shut down all HTTP processes as we speak
12:45 Slashman johnmark: I see, good luck
12:56 chirino joined #gluster
12:57 bennyturns joined #gluster
13:02 nshaikh joined #gluster
13:03 dusmant joined #gluster
13:05 chirino joined #gluster
13:08 robo joined #gluster
13:14 Excolo joined #gluster
13:15 johnmark ok, site's back, and installed a caching plugin
13:15 Excolo Ok, hopefully this is my last question in here for a while. I had to re-do our gluster installation, and I have a co-worker pissed off at me because, in my rush, I created the volume on just /export and not /export/somedir. Is there a way, while the volume is active (preferably; I can take it back offline if needed), to change that? (I have a 2 server replicated setup)
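One hedged sketch of how that is sometimes handled on a replica 2 volume: swap one brick at a time to the new path and let self-heal repopulate it before touching the second server. Names are assumptions, and because /export/somedir sits inside the old brick, glusterd may refuse the new path until the old brick's volume-id and gfid xattrs are removed; test elsewhere first:

    gluster volume replace-brick myvol server1:/export server1:/export/somedir commit force
    gluster volume heal myvol full    # wait for the heal to finish before moving on
    gluster volume replace-brick myvol server2:/export server2:/export/somedir commit force
    gluster volume heal myvol full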
13:18 hagarth joined #gluster
13:19 jclift joined #gluster
13:27 chirino joined #gluster
13:33 Norky joined #gluster
13:36 chirino joined #gluster
13:37 duerF joined #gluster
13:41 ninkotech__ joined #gluster
13:45 chirino joined #gluster
13:50 rcheleguini joined #gluster
13:53 chirino joined #gluster
13:59 spider_fingers left #gluster
14:02 bugs_ joined #gluster
14:08 chirino joined #gluster
14:09 Norky joined #gluster
14:12 Norky joined #gluster
14:14 failshell joined #gluster
14:18 chirino joined #gluster
14:18 B21956 joined #gluster
14:19 sgowda joined #gluster
14:21 B21956 left #gluster
14:23 vpshastry1 left #gluster
14:24 kaptk2 joined #gluster
14:25 f0reAlz joined #gluster
14:25 plarsen joined #gluster
14:25 chirino joined #gluster
14:26 jtux joined #gluster
14:26 f0reAlz \h
14:27 Norky joined #gluster
14:27 Technicool joined #gluster
14:28 sprachgenerator joined #gluster
14:28 zombiejebus joined #gluster
14:37 chirino joined #gluster
14:40 lpabon joined #gluster
14:40 sahina joined #gluster
14:41 Norky joined #gluster
14:42 zombiejebus joined #gluster
14:43 sprachgenerator joined #gluster
14:43 johnmark @channelstats
14:43 glusterbot johnmark: On #gluster there have been 173896 messages, containing 7356775 characters, 1228942 words, 4918 smileys, and 656 frowns; 1076 of those messages were ACTIONs. There have been 66635 joins, 2071 parts, 64566 quits, 21 kicks, 165 mode changes, and 7 topic changes. There are currently 206 users and the channel has peaked at 226 users.
14:44 bennyturns joined #gluster
14:45 zombiejebus joined #gluster
14:46 anands joined #gluster
14:47 * JoeJulian chuckles at gluster.org being unable to scale...
14:49 JoeJulian @later tell Excolo Man, you should stick around longer. You're always gone when I go through scrollback to answer questions.
14:49 glusterbot JoeJulian: The operation succeeded.
14:49 LoudNoises joined #gluster
14:52 eseyman joined #gluster
14:53 jurrien joined #gluster
14:54 jtux joined #gluster
14:57 andreask joined #gluster
14:57 daMaestro joined #gluster
14:58 chirino joined #gluster
15:02 bulde joined #gluster
15:09 JoeJulian johnmark: There... took it from a little over a 1.0 load, to 0.02.
15:09 chirino joined #gluster
15:09 JoeJulian ... and there's still too much free ram...
15:10 JoeJulian There's a bunch of mysql tuning that can be done, too, but I'll wait for some idle time.
15:10 jclift gluster.org now offline.  Is this what you're meaning?
15:11 JoeJulian wtf
15:11 JoeJulian maybe I won't wait then!
15:12 jclift Heh, I guess that's not what you were meaning. ;)
15:12 JoeJulian mysql wasn't running???
15:12 jclift I have no idea.
15:12 jclift I'm just looking at this from an end-user-trying-to-use-the-website PoV
15:12 JoeJulian johnmark: what are you doing???
15:13 johnmark JoeJulian: restarting services - because database was unavailable
15:13 johnmark JoeJulian: what are you doing?
15:13 JoeJulian I haven't touched mysql
15:14 johnmark JoeJulian: I keep getting "error establishing a database connection" and I can't figure out why
15:14 JoeJulian but I saw that services were stopped so...
15:14 johnmark JoeJulian: services are back up, but still getting the error
15:14 johnmark grr
15:14 jclift Note to self: We should put gluster.org on the cloud
15:15 johnmark JoeJulian: noticed memcached is running. I think we need to disable for now until it's back up
15:15 johnmark JoeJulian: it's a wordpress issue. hang on
15:16 jclift Since this is probably a transient high load thing, should we go and get a bigger host online somewhere for a day or two, loading the wordpress stuff into that?
15:17 jclift [question for later I guess, after the site is back up]
15:17 johnmark JoeJulian: great.... "wp_options: Table is marked as crashed"
15:17 johnmark ok, how do you repair mysql tables?
15:17 JoeJulian I'll fix it
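For a crashed MyISAM table like wp_options, the usual fix is a repair; a sketch, with the database name as an assumption:

    mysql -e "REPAIR TABLE wordpress.wp_options;"
    # or offline, against the table files directly:
    # myisamchk -r /var/lib/mysql/wordpress/wp_options.MYI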
15:18 glusterbot New news from newglusterbugs: [Bug 988946] 'systemd stop glusterd' doesn't stop all started gluster daemons <http://goo.gl/ZWKpy2>
15:18 rcheleguini joined #gluster
15:18 johnmark JoeJulian: ok
15:21 johnmark hooray
15:25 JoeJulian Pay no attention to the man behind the curtain...
15:25 chirino joined #gluster
15:26 johnmark ha
15:33 chirino joined #gluster
15:35 semiosis JoeJulian: http://www.youtube.com/watch?v=xhy7dXWjpAA
15:35 glusterbot Title: Fry Fix it !! - YouTube (at www.youtube.com)
15:36 robo joined #gluster
15:36 JoeJulian hehe
15:42 chirino joined #gluster
15:45 Peanut Doing a full recovery test was very useful - I lost one of my Ubuntu KVM cluster nodes yesterday due to a kernel bug that corrupted the root filesystem, but I managed to keep all my guests alive and now everything's restored and redundant again, yay. Almost weekend!
15:45 JoeJulian nice
15:47 johnmark woohoo
15:52 cicero Node Rebalanced-files          size       scanned      failures         status
15:52 cicero glusterfs5-int  139851097007712       127.2TB       2139337        151887      completed
15:52 cicero pretty sweet
15:52 cicero too bad the volume is only 2.5TB
15:54 aliguori joined #gluster
15:59 \_pol joined #gluster
15:59 chirino joined #gluster
16:00 \_pol Is root squash enable-able on 3.3?
16:00 \_pol And if so, how do you turn it on?
16:01 JoeJulian no
16:02 _pol JoeJulian: is it only a 3.4 feature?
16:03 zerick joined #gluster
16:03 JoeJulian Well, you could do it in 2.0, but not very reliably.
16:04 _pol JoeJulian: I'm confused, does that mean it is or isn't an available feature in 3.3/3.4?
16:05 JoeJulian Sorry, 3.4
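On 3.4, root squashing is exposed as a volume option; a one-line sketch, with the volume name assumed:

    gluster volume set myvol server.root-squash on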
16:06 bulde joined #gluster
16:12 chirino joined #gluster
16:20 rcheleguini joined #gluster
16:20 chirino joined #gluster
16:28 chirino joined #gluster
16:31 JoeJulian semiosis: You've got to read this one! http://permalink.gmane.org/gmane.comp.file-systems.gluster.user/12764
16:31 glusterbot <http://goo.gl/Otfvu0> (at permalink.gmane.org)
16:31 robo joined #gluster
16:32 vpshastry joined #gluster
16:40 chirino joined #gluster
16:46 vincent_vdk joined #gluster
16:47 Humble joined #gluster
16:48 chirino joined #gluster
16:48 mohankumar joined #gluster
16:50 nshaikh left #gluster
16:51 Mo_ joined #gluster
16:57 chirino joined #gluster
17:02 sahina joined #gluster
17:02 syntheti_ joined #gluster
17:15 bulde joined #gluster
17:21 vpshastry joined #gluster
17:21 vpshastry left #gluster
17:39 [o__o] left #gluster
17:41 [o__o] joined #gluster
17:42 [o__o] left #gluster
17:43 [o__o] joined #gluster
17:45 _pol joined #gluster
17:48 jclift joined #gluster
18:01 robo joined #gluster
18:28 bennyturns joined #gluster
18:30 lpabon joined #gluster
18:33 bet_ joined #gluster
18:36 dmueller left #gluster
18:59 robo joined #gluster
19:19 jporterfield joined #gluster
19:25 jporterfield joined #gluster
19:28 thanosme joined #gluster
19:35 lpabon joined #gluster
19:43 hamnstar joined #gluster
19:49 hamnstar replicated volumes over a 26 ms latency link for VM hosting... ridiculous?
19:52 hamnstar better yet, 50Mbps link
19:54 jporterfield joined #gluster
20:07 JoeJulian Depends on your use case... <shrug>
20:09 hamnstar I figured as much.  I'm aiming to make a highly available XenServer pool, and I'm really on the fence about whether it's a crazy idea or not
20:10 hamnstar trying to find out how gluster and xen work under the hood
20:11 JoeJulian sounds a little crazy to me...
20:11 JoeJulian xen doesn't (yet) have any direct support for gluster volumes.
20:11 hamnstar yeah... but it's just so tantalizing that I want to believe....
20:12 hamnstar well, in a roundabout way xen can use gluster's built in NFS
20:13 hamnstar not ideal but possible
20:13 JoeJulian Oh, sure, possible. Try it out and see if it works for you.
20:13 JoeJulian Then blog about it. :D
20:13 hamnstar hahaha yeah, I've definitely come across yours on the topic
20:22 a2_ can't you use xen through qemu?
20:23 JoeJulian hehe, I think so! :D
20:23 JoeJulian But I thought it was just kvm that had support
20:23 a2_ so, why won't the gluster block driver work?
20:23 JoeJulian Now I've got to try it out...
20:24 a2_ bah, the block layer and the hypervisor abstraction in qemu are miles apart.. i have no reason to believe the block layer is even aware of what hypervisor is being used
20:24 a2_ i would be surprised if it does not work.. there is *nothing* kvm specific in the gluster block driver
20:24 JoeJulian cool
20:25 a2_ in fact, i'm now curious why we even say "kvm integration" in our website and blogs etc.
20:25 lpabon but i think you mean that the block driver can be accessed through FUSE for XEN.  Right?
20:26 lpabon and QEMU accesses the block driver through gfapi?
20:26 JoeJulian Because nobody knew that...
20:26 a2_ qemu accesses gluster through gfapi.. and when you use xen through qemu, i think it uses the same block driver, no?
20:27 a2_ JoeJulian, i may be wrong.. i am making an assumption about how qemu works (a good guess actually)
20:27 lpabon it depends if XEN uses qemu block io interface, but i do not know
20:27 lpabon if not, then it must go through whatever IO method it uses over FUSE
20:27 a2_ FUSE surely works
20:28 a2_ can't you use qcow2 with xen then?
20:28 lpabon i'm not sure if that is how XEN works.. I'm not sure if they use QEMU as their IO
20:28 lpabon i'm almost sure they do not, but I am not 100%
20:29 a2_ ok.. i may be wrong then. i assumed qemu's block IO was agnostic to the hypervisor technology (it is very generic after all)
20:29 lpabon true, we just need to confirm what the IO stack for XEN is
20:30 lpabon brb, asking in #qemu in OFTC
20:30 a2_ aliguori is right here.. not sure if he's active now though
20:31 lpabon quote from #qemu:  lpabon: I think they have some in kernel block stuff
20:31 lpabon http://blog.xen.org/index.php/2011/05/16/daniel-castros-gsoc-project-add-xen-pv-block-device-support-to-seabios/
20:31 glusterbot <http://goo.gl/4LWXhk> (at blog.xen.org)
20:33 lpabon i'm not sure why they gave me that link, that seems to deal with the VM itself, imo
20:33 a2_ aha
20:34 a2_ lpabon, i think that is basic underpinnings for getting block io into qemu?
20:34 lpabon i think i'm more confused now than before :-)
20:34 _pol in CentOS6.4 I get an "unknown option _netdev" when I try to use the gluster native client mount.
20:34 _pol Is that... expected?
20:34 lpabon maybe i'll just get the code and answer the question that way
20:35 a2_ that was a 2011 project
20:35 a2_ any idea if it was completed?
20:35 lpabon no, not yet
20:36 JoeJulian _pol: Only if you haven't upgraded
20:36 _pol JoeJulian: haven't upgraded to what?
20:37 JoeJulian _pol: to a current version. That extraneous message has been silenced.
20:38 _pol JoeJulian: I am using glusterfs 3.3.1, so you are saying if I upgrade to LATEST then that message will go away?
20:38 lpabon a2_: i mean, no i do not know yet
20:43 JoeJulian _pol: Rats... That patch didn't get in release-3.3. So it's only silenced in 3.4.0. Regardless, that mount option is an init option, not really a mount option. So it's technically a correct statement, but it can be safely ignored.
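A typical fstab entry keeps _netdev anyway, since the init scripts use it to delay the mount until networking is up; the names here are assumptions:

    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0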
20:47 JerryM joined #gluster
21:06 aliguori a2_, xen can use qemu's block layer but that's not typical, the typical case is to use blkfront and blkback which skips qemu and is implemented in the kernel
21:06 aliguori it's possible to use qemu event with blkfront but i don't think that's common at all
21:06 aliguori even*
21:10 a2_ aliguori, hmm ok.. thanks for clarifying!
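For reference, the gfapi-backed block driver a2_ is talking about is addressed from QEMU (1.3 and later, when built with GlusterFS support) with a gluster:// URI; the host, volume and image name below are assumptions:

    qemu-img create -f qcow2 gluster://server1/myvol/vm1.qcow2 20G
    qemu-system-x86_64 -m 1024 -drive file=gluster://server1/myvol/vm1.qcow2,if=virtio,format=qcow2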
21:16 zerick joined #gluster
22:10 zerick joined #gluster
22:21 bala joined #gluster
22:23 robo joined #gluster
22:25 awheele__ joined #gluster
23:01 awheeler joined #gluster
23:02 badone_ joined #gluster
23:57 thanosme left #gluster
