IRC log for #gluster, 2013-10-28

All times shown according to UTC.

Time Nick Message
00:02 _pol joined #gluster
00:33 asias joined #gluster
00:34 _zerick_ joined #gluster
00:57 glusterbot New news from newglusterbugs: [Bug 958781] KVM guest I/O errors with xfs backed gluster volumes <http://goo.gl/4Goa9>
00:59 harish_ joined #gluster
01:14 mtanner_ joined #gluster
01:19 the-me joined #gluster
03:05 bharata-rao joined #gluster
03:20 kshlm joined #gluster
03:20 maxburk left #gluster
03:21 dusmant joined #gluster
03:26 shubhendu joined #gluster
03:49 itisravi joined #gluster
04:02 mohankumar joined #gluster
04:02 kanagaraj joined #gluster
04:03 RameshN joined #gluster
04:14 spandit joined #gluster
04:32 ndarshan joined #gluster
04:39 MrNaviPacho joined #gluster
04:41 hagarth joined #gluster
04:48 dusmant joined #gluster
04:51 ppai joined #gluster
04:53 spandit joined #gluster
04:55 kanagaraj joined #gluster
04:55 shubhendu joined #gluster
04:56 ndarshan joined #gluster
04:57 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <http://goo.gl/XFSM4>
04:58 ajha joined #gluster
04:59 RameshN joined #gluster
05:14 _pol joined #gluster
05:23 ababu joined #gluster
05:24 nshaikh joined #gluster
05:28 psharma joined #gluster
05:30 aravindavk joined #gluster
05:30 lalatenduM joined #gluster
05:36 shilpa_ joined #gluster
05:37 raghu joined #gluster
05:38 RameshN joined #gluster
05:38 shruti joined #gluster
05:39 bala joined #gluster
05:58 glusterbot New news from newglusterbugs: [Bug 990028] enable gfid to path conversion <http://goo.gl/1HwiQc>
06:09 saurabh joined #gluster
06:15 CheRi joined #gluster
06:15 rastar joined #gluster
06:17 DV joined #gluster
06:18 ricky-ticky joined #gluster
06:21 shylesh joined #gluster
06:28 msvbhat joined #gluster
06:30 ajha joined #gluster
06:33 vimal joined #gluster
06:35 mohankumar joined #gluster
06:39 ngoswami joined #gluster
06:42 rgustafs joined #gluster
06:42 shyam joined #gluster
06:48 spandit joined #gluster
06:53 rastar joined #gluster
06:55 satheesh1 joined #gluster
06:56 shubhendu joined #gluster
07:22 ngoswami joined #gluster
07:25 jtux joined #gluster
07:27 meghanam joined #gluster
07:27 meghanam_ joined #gluster
07:37 ajha joined #gluster
07:38 DV joined #gluster
07:38 ekuric joined #gluster
07:38 an joined #gluster
07:42 aravindavk joined #gluster
07:44 ababu joined #gluster
07:44 dusmant joined #gluster
07:45 bala joined #gluster
07:51 tziOm joined #gluster
07:58 ctria joined #gluster
08:02 hngkr_ joined #gluster
08:03 dneary joined #gluster
08:05 eseyman joined #gluster
08:05 franc joined #gluster
08:05 rastar joined #gluster
08:10 ndarshan joined #gluster
08:14 kanagaraj joined #gluster
08:15 keytab joined #gluster
08:15 RameshN joined #gluster
08:18 dusmant joined #gluster
08:26 ababu joined #gluster
08:26 bala joined #gluster
08:26 ppai joined #gluster
08:34 mgebbe_ joined #gluster
08:38 RedShift2 joined #gluster
08:39 franc joined #gluster
08:39 franc joined #gluster
08:41 TDJACR joined #gluster
08:42 tjikkun_work joined #gluster
08:50 andreask joined #gluster
08:53 jtux joined #gluster
08:57 TDJACR joined #gluster
08:58 spandit joined #gluster
09:22 shanks joined #gluster
09:26 X3NQ joined #gluster
09:27 Norky joined #gluster
09:33 bulde joined #gluster
09:40 calum_ joined #gluster
09:41 vshankar joined #gluster
10:26 kkeithley1 joined #gluster
10:33 kkeithley_ (,,repo)
10:33 kkeithley_ @repo
10:33 glusterbot kkeithley_: I do not know about 'repo', but I do know about these similar topics: 'git repo', 'ppa repo', 'repos', 'repository', 'yum repo'
10:33 kkeithley_ (,,yum repo)
10:34 kkeithley_ @yum repo
10:34 glusterbot kkeithley_: The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://goo.gl/42wTd5 The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
10:35 kkeithley_ @update
10:37 bala joined #gluster
10:43 RameshN joined #gluster
10:44 msciciel joined #gluster
11:02 mohankumar joined #gluster
11:05 kshlm :q
11:09 bala joined #gluster
11:10 rcheleguini joined #gluster
11:24 ppai joined #gluster
11:28 morsik joined #gluster
11:28 morsik hio
11:28 morsik how to mount bd in gluster 3.4?
11:35 ira joined #gluster
11:42 mohankumar__ joined #gluster
11:45 mohankumar morsik: do you mean Block Device xlator?
11:53 gluslog_ joined #gluster
11:53 samppah_ joined #gluster
11:56 ProT-0-TypE joined #gluster
12:05 bulde1 joined #gluster
12:08 asias joined #gluster
12:11 harish_ joined #gluster
12:12 itisravi joined #gluster
12:12 calum_ joined #gluster
12:19 rjoseph joined #gluster
12:22 hagarth joined #gluster
12:24 morsik mohankumar: yes. is it possible to mount it somewhere without using qemu?
12:25 mohankumar morsik: yes, mounting is similar to posix volume mount
12:26 morsik but... how to do that? i couldn't find any docs about that
12:35 diegows_ joined #gluster
12:39 rwheeler joined #gluster
12:41 glusterbot New news from resolvedglusterbugs: [Bug 1005161] rpm: fix "warning: File listed twice: .../glusterd.info" <http://goo.gl/qowoBA>
12:43 asias joined #gluster
12:44 uebera|| joined #gluster
12:45 mohankumar morsik: https://forge.gluster.org/glusterfs-quota/glusterfs-quota/blobs/e9c583598b8ad58bbda15759067ff57eca619e95/doc/features/bd.txt
12:45 glusterbot <http://goo.gl/R2mtVP> (at forge.gluster.org)
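As mohankumar says above, a BD-backed volume is mounted the same way as any posix-backed volume; a minimal sketch (hostname, volume name, and mount point are placeholders):

    # mount the BD volume over FUSE, exactly like a regular glusterfs volume
    mkdir -p /mnt/bdvol
    mount -t glusterfs server1:/bdvol /mnt/bdvol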
12:46 RameshN joined #gluster
12:54 _zerick_ joined #gluster
12:55 rc10 joined #gluster
12:55 rc10 hi, can i know the difference between flush-behind and write-behind ?
12:59 bennyturns joined #gluster
13:00 andreask joined #gluster
13:03 ndarshan joined #gluster
13:14 chirino joined #gluster
13:15 aixsyd joined #gluster
13:19 _ilbot joined #gluster
13:19 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
13:20 ppai joined #gluster
13:25 aixsyd hiya gents
13:25 aixsyd got a quick question for you
13:26 aixsyd I was looking at the video tutorials for Gluster Storage Platform
13:26 aixsyd what happened to it? why has it been discontinued?
13:29 glusterbot New news from newglusterbugs: [Bug 1023974] Moving a directory with content, into a directory where there is no quota left, succeeds <http://goo.gl/0m4qJ1>
13:31 kaptk2 joined #gluster
13:32 hagarth aixsyd: are you looking for a GUI to manage or an integrated ISO?
13:33 aixsyd either/or
13:33 aixsyd It just seemed like the videos' installer/GUI was very fast and simple to configure - and now it looks like you've gotta do a base linux distro install then install glusterfs over top
13:33 aixsyd the GUI in the videos looked good, too.
13:34 aixsyd (now everything looks like its CLI)
13:35 aixsyd one would have assumed that the gluster storage platform ISO/USB installer was a mini distro tuned just for storage - but installing something like Ubuntu server, or Debian would install a ton of other un-needed bloatware
13:40 dbruhn joined #gluster
13:42 aixsyd hagarth: i cant even find archived links to the USB/ISO D:
13:42 RicardoSSP joined #gluster
13:45 RedShift2 joined #gluster
13:45 hagarth aixsyd: oVirt provides GUI based management for GlusterFS.. there's also ovirt-node which provides a minimal ISO with glusterfs packages.
13:51 aixsyd can one use oVirt just for a storage server w/ GlusterFS? I'm looking for a 2-node HA NFS-serving storage cluster to use with Proxmox 3.1...
13:52 hagarth aixsyd: yes, that is possible too. You can use oVirt in gluster mode for gluster functionality management.
13:53 aixsyd I've been mucking around with a DRBD + pacemaker + Corosync HA cluster - and its... intense, to say the least
13:53 emitor joined #gluster
13:54 aixsyd And to get NFS exporting working on a system like that - I need fencing devices and a lot of extra hardware that I dont currently have just to test it - and management isnt looking to sign off on new hardware for a test
13:55 aixsyd Though, from what I read - any/all clusters should be using a fencing device. But the way Gluster was looking - that may not be needed. Any truth to that?
13:57 B21956 joined #gluster
14:01 gmcwhistler joined #gluster
14:03 hagarth aixsyd: yes, fencing is not required for HA with glusterfs
14:04 aixsyd WOOOO
14:04 aixsyd so who in their right mind would still pot for DRBD if this is the case?
14:04 aixsyd *opt
14:05 ndk joined #gluster
14:06 Remco Different performance characteristics
14:07 aixsyd in what way?
14:10 Remco From what I can tell drbd has more raw performance
14:10 Remco But it's much harder to scale
14:11 Remco Also gluster hates latency, while drbd can be set not to care for a hot spare
14:11 Remco They're wildly different systems, so you should really test both
14:12 GabrieleV joined #gluster
14:12 _mattf joined #gluster
14:14 wushudoin joined #gluster
14:14 wushudoin joined #gluster
14:15 aixsyd Remco: I did do some extensive testing with DRBD - and ill tell you that Proxmox hated it when a node went down. as in, pvestatd would freak out and say that all nodes in the proxmox cluster lost quorum and weird stuff. it looks like glusterfs will do better seeing that its natively supported in proxmox now..
14:16 bugs_ joined #gluster
14:16 Remco Ah, quorum doesn't work very well with two nodes
14:16 Remco So yes, it breaks fast then
14:17 aixsyd and I assume that glusterfs works differently, and therefore shouldn't break proxmox as DRBD did, then?
14:17 Remco I'd say so
14:17 aixsyd sweeeet.
14:17 Remco If you do NFS exports, it might be a bit harder
14:18 aixsyd is iscsi possible? i'm doubting so..
14:23 Remco Well, there is the fuse mount and nfs, so you can probably write code for it to do iscsi :P
14:25 itisravi joined #gluster
14:25 bulde joined #gluster
14:26 aixsyd oh joy
14:26 dusmant joined #gluster
14:27 kkeithley_ (,,yum repos)
14:27 kkeithley_ @yum repos
14:27 glusterbot kkeithley_: I do not know about 'yum repos', but I do know about these similar topics: 'yum repo'
14:27 kkeithley_ (,,yum repo)
14:27 kkeithley_ @yum repo
14:27 glusterbot kkeithley_: The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://goo.gl/42wTd5 The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
14:30 kkeithley_ ,,(yum repo)
14:30 glusterbot The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://goo.gl/42wTd5 The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
14:31 kkeithley_ new glusterfs-3.4.1-3 RPMs for el5 and el6 are available from http://goo.gl/42wTd5 . glusterfs-3.4.1-3 RPMs for Fedora will be in the Fedora updates-testing repo soon, and in the updates (stable) repo in about a week.
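For anyone following along, installing from that community repo usually comes down to dropping the .repo file into yum's configuration and installing the packages; a rough sketch (the repo file URL and name below are placeholders, use the link kkeithley_ posted):

    # fetch the community repo definition into yum's config directory
    curl -o /etc/yum.repos.d/glusterfs-epel.repo http://<repo-url-from-above>/glusterfs-epel.repo
    yum install glusterfs-server glusterfs-fuse
    service glusterd start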
14:31 ndevos \o/
14:31 ericsean joined #gluster
14:32 ericsean left #gluster
14:33 aixsyd does ovirt-node have GlusterFS?
14:34 ndevos kkeithley_: btw, should there not be an email to the epel-announce mailinglist to inform EPEL users that glusterfs has been dropped from EPEL?
14:38 kkeithley_ probably
14:39 rgustafs joined #gluster
14:47 rwheeler joined #gluster
14:51 rjoseph joined #gluster
14:52 manik joined #gluster
14:54 an joined #gluster
14:56 rc10 joined #gluster
15:00 aixsyd does anyone know if ovirt-node has GlusterFS?
15:00 jiffe99 I am running gluster 3.3.1, can I just do the normal upgrade procedure one node at a time to take it to 3.4.1 ?
15:00 daMaestro joined #gluster
15:01 chirino joined #gluster
15:05 failshell joined #gluster
15:06 ira joined #gluster
15:14 gurubert1 joined #gluster
15:14 harish_ joined #gluster
15:14 jbrooks joined #gluster
15:17 gurubert1 hi
15:17 glusterbot gurubert1: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:18 gurubert1 is it recommended to run gluster storage on the same nodes as the kvm virtualization for an integrated computer-storage-cluster?
15:18 Guest54292 joined #gluster
15:22 Skaag joined #gluster
15:22 Skaag Hi guys, I just setup a replicated volume on two nodes, but nothing is replicating. Peer status shows the peer is connected, volume status shows everything is fine... and yet, files do not replicate.
15:27 zaitcev joined #gluster
15:28 semiosis Skaag: are you writing directly to bricks, or writing through a client mount point?
15:29 Skaag through a client mount point
15:29 LoudNoises joined #gluster
15:29 Skaag it's mounted in /mnt/gluster/ on both nodes
15:29 Skaag actually /mnt/glusterfs/
15:29 Skaag glusterfs#xgnt-vps-001:/ghome on /mnt/glusterfs type fuse (rw,default_permissions,allow_other,max_read=131072)
15:29 semiosis check client log file.  chances are the client is not connected to one of the bricks.  if you want, pastie.org your client log file as well as 'gluster volume info' output
15:31 Skaag http://pastie.org/8437504 <- volume info
15:31 glusterbot Title: #8437504 - Pastie (at pastie.org)
15:31 Skaag (looks the same when I run it on either nodes)
15:33 Skaag http://pastie.org/8437513 <- gluster volume status
15:33 glusterbot Title: #8437513 - Pastie (at pastie.org)
15:35 Skaag http://pastie.org/8437516 <- log (hope it's the right one)
15:35 glusterbot Title: #8437516 - Pastie (at pastie.org)
15:36 aravindavk joined #gluster
15:36 semiosis thats not a client log file but this line suggests a problem... [2013-10-28 15:27:36.517168] E [client-handshake.c:1741:client_query_portmap_cbk] 0-ghome-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
15:37 semiosis can you telnet from that host to xgnt-vps-001 port 24007?
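The connectivity check semiosis is asking for is roughly the following (24007 is the glusterd management port; the per-brick ports the client also needs are listed in the volume status output):

    # can the client reach the management daemon?
    telnet xgnt-vps-001 24007
    # brick processes and their ports, run on a server
    gluster volume status ghome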
15:37 chirino joined #gluster
15:42 dbruhn I have a couple bricks that have what look to be incorrect trusted.afr attributes. how do I go about resetting those after I have conformed they are correct
15:43 dbruhn confirmed
15:49 semiosis dbruhn: only time you should manually fix trusted.afr is to resolve split brain afaik
15:50 semiosis use setfattr of course
15:50 semiosis setting the afr attribute to zero marks that copy as "good"
15:50 dbruhn semiosis, that is the exact issue I am having split brain on "/"
15:50 semiosis actually, wait
15:51 semiosis marking zero means not good.  non-zero means it has unsynced changes
15:51 semiosis "good" was not the right term
15:52 dbruhn maybe I can throw this up on fpaste and you can tell me what you think
15:53 semiosis of course you can, though maybe unnecessary
15:53 semiosis i've seen it before...
15:53 semiosis just check to make sure that the brick root dirs have same timestamps, permissions, owners, and contents (just in the brick root, not deeper)
15:53 semiosis once confirmed then you can set the afr xattrs to zero
15:54 dbruhn yeah, that's what I was reading this weekend
15:55 dbruhn what is .landfill?
15:55 semiosis idk
15:56 dbruhn ok, they are confirmed matching
15:56 dbruhn what is the command to set them to zero?
15:56 semiosis see man setfattr
15:57 semiosis afk
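For reference, the reset semiosis is describing usually looks like the following, run against the brick root on the affected server (the attribute name and brick path are taken from dbruhn's later message and are only examples; inspect your own bricks with getfattr first):

    # dump the current AFR changelog attributes on the brick root
    getfattr -m trusted.afr -d -e hex /var/brick19
    # once both copies are confirmed identical, clear the pending counters
    setfattr -n trusted.afr.ENTV04EP-client-19 -v 0x000000000000000000000000 /var/brick19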
15:59 _ilbot joined #gluster
15:59 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
16:01 manik joined #gluster
16:05 dusmant joined #gluster
16:09 hflai joined #gluster
16:10 hagarth joined #gluster
16:11 kshlm joined #gluster
16:15 itisravi joined #gluster
16:22 shilpa_ joined #gluster
16:24 failshell joined #gluster
16:35 dbruhn semiosis, little confused on the usage of the setfattr command does this look right "setfattr -n trusted.afr.ENTV04EP-client-19 -v 0x000000000000000000000000 -h /var/brick19/"
16:36 manik joined #gluster
16:37 Mo__ joined #gluster
16:47 Skaag semiosis, yes, telnet from both vps servers to each other on port 24007 works
16:53 gurubert1 left #gluster
16:57 bulde joined #gluster
16:57 failshe__ joined #gluster
16:58 RameshN joined #gluster
17:00 hflai joined #gluster
17:02 verdurin_ joined #gluster
17:04 ninkotech joined #gluster
17:05 ninkotech_ joined #gluster
17:07 Skaag semiosis, I think maybe it's because the 2 node setup does not meet quorum
17:12 bcdonadio joined #gluster
17:15 bcdonadio If the replica quorum is not met when writing a file in gluster, does the write succeed and sync later, or does the cluster block any writing until the replica quorum reaches the configured count?
17:17 failshell joined #gluster
17:21 ninkotech joined #gluster
17:21 ninkotech_ joined #gluster
17:29 ninkotech_ joined #gluster
17:29 ninkotech joined #gluster
17:35 cfeller joined #gluster
17:36 saurabh joined #gluster
17:39 hflai joined #gluster
17:45 aixsyd Gah. I cant seem to figure this out. I installed ovirt - but... where is glusterfs stuff configured?
18:01 B21956 left #gluster
18:06 samppah_ aixsyd: do you want to use ovirt to manage glusterfs or use gluster storage for virtualization?
18:06 hngkr_ joined #gluster
18:06 aixsyd samppah_: I'm looking for a 2-node HA NFS-serving storage cluster to use with Proxmox 3.1...
18:07 aixsyd and i was told ovirt has a gui front end for managaing and setting up glusterfs
18:08 samppah_ aixsyd: oh ok, i think it's possible but i haven
18:08 samppah_ 't tested it
18:08 samppah_ gluster cli is pretty neat and easy to use
18:09 aixsyd think i'd be better served installing something like ubuntu server and just go CLI?
18:09 kbsingh CentOS!!!
18:09 kbsingh you are best off installing CentOS and just go CLI
18:10 kbsingh ofcourse, I am being completely fair and all that, with no specific involved interest at all
18:10 samppah_ :)
18:11 kkeithley_ It's libvirt IIRC that has the gui
18:12 aixsyd hmm.
18:12 aixsyd I've been experimenting with drbd and heartbeat and all that - but without a fencing device - its impossible to test
18:13 pdrakewe_ I'm having some trouble with v3.4.1 on debian.  I have two servers, each with 1 brick and files are replicated.  each server is also a client.  replication works fine when I initially configure it.
18:13 pdrakewe_ then, I disrupt the network between them (iptables DROP packets), touch a file on one, then restore the network.  when I do this, the file is never replicated over to the other server as I would expect.
18:13 JoeJulian damned me... I fixed my puppet scripts so I'm not auto-updating glusterfs-server, but I missed yum-cron. :( Had to fix a bunch of split-brain again.
18:14 pdrakewe_ gluster volume status and gluster peer status confirm that the servers are able to see each other.
18:14 JoeJulian pdrakewe_: Are you touching a file on the brick or through the client?
18:15 pdrakewe_ JoeJulian: through the client
18:15 pdrakewe_ unmounting and remounting causes replication to start working again
18:15 JoeJulian does "gluster volume heal $vol info" show anything?
18:15 pdrakewe_ yes, gluster volume heal VOLNAME info yields a list of the files that need to be healed
18:15 pdrakewe_ trying to force a heal with gluster volume heal VOLNAME outputs "Launching heal operation to perform index self heal on volume VOLNAME has been unsuccessful"
18:16 JoeJulian check /var/log/glusterfs/etc-glusterfs-glusterd.vol.log for why that's unsuccessful.
18:18 dbruhn joejulian, weird question, do you have a version of your split brain script that will cause it to choose the bigger of the two files?
18:18 DataBeaver joined #gluster
18:18 pdrakewe_ the only log entry since issuing that command is "Received heal vol req for volume test-fs-cluster-1"
18:18 pdrakewe_ nothing in the log on the other server since issuing that command
18:19 JoeJulian dbruhn: No. I don't have any use case that would make use of something like that.
18:20 JoeJulian dbruhn: That would also require coordination between servers in identifying which one satisfied your requirements and choosing it.
18:21 daMaestro aixsyd, i'd recommend you not setup ovirt just to configure gluster for proxmox ;-)
18:22 JoeJulian pdrakewe_: I'm sitting in a coffee shop after getting reports of things failing this morning and stopping during my commute to fix all my split-brain. I'm going to head the rest of the way in to the office and pick up from there.
18:22 daMaestro JoeJulian, ouch.
18:23 pdrakewe_ JoeJulian: yikes
18:23 aixsyd daMaestro: i'm setting up ubuntu server now - and using the offical glusterfs tutorial =)
18:23 pdrakewe_ JoeJulian: I appreciate the pointers.  if you think of anything else I should check after arriving at the office, I'll be here.
18:23 JoeJulian yeah... <sigh> my fault, though. I missed excluding gluster from yum-cron and everybody seems to think that blindly restarting glusterfsd is a "good thing" and the "fedora way".
18:24 samppah_ @ubuntu
18:24 samppah_ @ppa
18:24 glusterbot samppah_: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
18:24 samppah_ aixsyd: ^
18:24 JoeJulian ndevos: ^^^
18:24 daMaestro JoeJulian, what package source? i thought we addressed the package upgrade HUP stuff a while ago?
18:26 aixsyd perfect
18:27 eldub joined #gluster
18:28 JoeJulian daMaestro: We did. But  that got broken in a recent rpm, fixed, and is in the process of being broken again per bug 1022542
18:28 glusterbot Bug http://goo.gl/8UhTjA unspecified, unspecified, ---, ndevos, ON_QA , glusterfsd stop command does not stop bricks
18:29 eldub left #gluster
18:30 kr1ss joined #gluster
18:32 Skaag bcdonadio, it looks like everything is written, but nothing is replicated...
18:34 failshell joined #gluster
18:35 elyograg Skaag: you've verified that the servers can reach each other on that port, but what about the *client* ?  gluster replication is done by the client - it writes to all the replicas.
18:35 semiosis dbruhn: 0x000000000000000000000000 is the same as 0x0
18:35 semiosis fyi
18:36 semiosis besides that yeah seems like a reasonable command
18:36 semiosis you can of course test on any other file not in gluster
18:36 semiosis should be able to create/set a trusted xattr on any file & read it back.
18:36 semiosis such an xattr is meaningless outside of a gluster brick
18:37 dbruhn semiosis, thanks I did just that a bit ago and figured it out
18:37 dbruhn thanks for the help
18:37 semiosis yw
18:37 failshell joined #gluster
18:38 Skaag elyograg, what do you mean, I'm mounting from the same machine that is also the brick and the gluster server
18:39 elyograg Skaag: ok, then you have tested from the client.  and I'll be quiet now. :)
18:39 Skaag I think it may be the quorum thing
18:39 Skaag I don't know how to diable the quorum functionality
18:43 elyograg if you've only got two replicas, I think you need to turn quorum off.  i thought it defaulted to off, but I've not been watching what's happening for a while.
18:48 kkeithley_ JoeJulian: the glusterfsd.service ExecStart=/usr/true change started all your glusterfsds anyway?
18:49 Skaag is turning off quorum simply a matter of setting cluster.quorum-type to none?
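If quorum does turn out to be enabled on the volume (JoeJulian notes further down that it is off by default), adjusting it is an ordinary volume-set operation; a sketch using the volume name from Skaag's pastes:

    # check whether any quorum options have been set
    gluster volume info ghome
    # disable client-side quorum enforcement
    gluster volume set ghome cluster.quorum-type none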
18:50 _BryanHm_ joined #gluster
18:52 hngkr joined #gluster
18:54 ninkotech_ joined #gluster
18:54 ninkotech joined #gluster
19:12 hateya joined #gluster
19:14 ricky-ticky joined #gluster
19:16 aixsyd hey gluster guys - i just got my first two-node cluster running. I want to know if this is a valid test of gluster - I set them up as two VM's. I have proxmox looking at node 1 for the gv0 volume. If I "stop" aka, kill node 1 - will node two pick up where node 1 left off?
19:17 Remco If you use the native gluster mount, yes
19:17 aixsyd wow... really?
19:17 Remco Writes are sent to each replica at the same time
19:17 aixsyd how does proxmox know about node 2 - the native glusterness? :P
19:17 Remco It's how you mount it
19:18 Remco If you use the fuse mount, it will work like this. With NFS, that's all up to you
19:18 aixsyd i'm used to DRBD and Corosync - where it makes a virtual IP to connect to - this just uses the primary server IP?
19:19 Remco The gluster fuse mount gets the cluster configuration from one node, then connects to all of them
19:20 aixsyd interesting
19:24 aixsyd weird. suddenly, my node 1 is not responding..
19:25 aixsyd i set up a 50gb volume - is there a build time, like mdadm? if so, any way to check its progress?
19:25 aixsyd jesus, the server load is immense. its got 4 cores, and running about 8 load
19:25 Remco What are you doing to it?
19:25 aixsyd not a thing
19:26 aixsyd i mounted it via proxmox, attempted to create a VM, and proxmox says it got a timeout
19:26 aixsyd meanwhile, the load on node 1 is immense
19:26 aixsyd same with load 2
19:26 Remco I set it up once for testing, so I have no idea why it is doing that
19:26 aixsyd er, node 2
19:27 aixsyd its gotta be building something
19:28 aixsyd now it timed out completely on proxmox...
19:28 JoeJulian kkeithley: No, the prior rpm that you already fixed the broken init (el6). It's unrelated to 1022542 but I'm still going to rail against it. :D
19:30 elyograg no, there's no build time.  It's possible (but not generally recommended) to set up a filesystem with existing data as one of your replicas and have gluster automatically create the other replicas.  If you started with empty directories for your bricks, then a stable volume should create pretty fast.
19:30 JoeJulian Skaag, elyograg: quorum is not enabled by default.
19:31 JoeJulian aixsyd: What version of glusterfs are you using?
19:32 aixsyd 3.4.1
19:32 chirino joined #gluster
19:32 aixsyd according to proxmox, both of them are doing some hardcore disk writes
19:32 JoeJulian Well then it's not what I was thinking...
19:32 aixsyd is it just building the volume?
19:33 JoeJulian Must just be proxmox building a 50gb file.
19:33 aixsyd hmmmm
19:33 JoeJulian What is your network speed?
19:33 aixsyd gigabit
19:33 JoeJulian So it should take around 8 or 9 minutes...
19:34 aixsyd its been about.. 15 so far
19:34 Remco Something like 60 MB/s in the best case
19:34 JoeJulian I wonder if it uses incomplete frames...
19:34 aixsyd its showing about 35-40mb/s
19:35 aixsyd so does the gluster client (proxmox) create the useable dataspace? i'd assume it was the server that'd do it.
19:35 JoeJulian If proxmox does something like dd if=/dev/zero of=myimage without setting the blocksize (bs) then it'll take next to forever.
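A rough illustration of the point: dd defaults to 512-byte blocks, so writing a 50GB image that way means on the order of a hundred million tiny FUSE writes, versus something like (path is a placeholder):

    # ~51k one-megabyte writes instead of ~100 million 512-byte writes
    dd if=/dev/zero of=/mnt/gv0/images/myimage bs=1M count=51200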
19:35 Remco Two replicas, so 500 Mbit per node, and you are CPU limited it seems, so that's pretty good speed
19:35 aixsyd oh cool.
19:35 aixsyd lol
19:35 elyograg joined #gluster
19:35 aixsyd load has gone down a lot..
19:36 _BryanHm_ joined #gluster
19:36 aixsyd oh i see what happened.
19:36 JoeJulian Remco: One replica's local, so no network time.
19:36 gmcwhistler joined #gluster
19:36 Remco Oh, ok
19:36 JoeJulian yeah....
19:36 aixsyd i made a VM with 32gb hdd. it times out during the creation of it. it took all that time to make it - its showing about 30ish GB used on the glusterfs volume now
19:38 NuxRo joined #gluster
19:39 gGer joined #gluster
19:40 mjrosenb joined #gluster
19:41 pdrakewe_ JoeJulian: thought you might be interested to know...  turns out my build script has a typo so I was building the master branch instead of the v3.4.1 tag.  packages built from master, the self-replication healing doesn't work.  packages built from v3.4.1, it appears to work fine.  ty for your help earlier.
19:42 JoeJulian interesting
19:42 aixsyd well, here goes nothing. I'm gonna attempt to install windows 7 pro onto it, and kill node 1 during the install... >)
19:43 aixsyd that should be okay, yes? >_>
19:43 aixsyd (i know that would break the shit our of DRBD w/o a proper fencing device) XD
19:43 aixsyd *out of
19:46 aixsyd nope. broke it. says storage offline
19:47 aixsyd :'(
19:47 JoeJulian aixsyd: It shouldn't.
19:47 JoeJulian ~pasteinfo | aixsyd
19:47 glusterbot aixsyd: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:47 aixsyd sec
19:48 samppah_ aixsyd: nfs or native glusterfs?
19:48 aixsyd http://fpaste.org/50018/13829896/
19:48 glusterbot Title: #50018 Fedora Project Pastebin (at fpaste.org)
19:48 aixsyd native glusterfs
19:49 JoeJulian Ok, that looks fine. Let's see "gluster volume gv0 status clients"
19:49 aixsyd unrecognized word: gv0 (position 1)
19:49 JoeJulian oops
19:49 JoeJulian Ok, that looks fine. Let's see "gluster volume status gv0 clients"
19:50 aixsyd http://fpaste.org/50019/98981613/
19:50 glusterbot Title: #50019 Fedora Project Pastebin (at fpaste.org)
19:52 JoeJulian gluster1 must be .240 and gluster2 is .189?
19:52 aixsyd yes
19:54 samppah_ network.ping-timeout maybe?
19:54 aixsyd i brought up node 1 and the VM came back
19:55 JoeJulian which machine is your vm host?
19:55 aixsyd neither of thoser
19:55 aixsyd *those
19:55 aixsyd its 10.0.0.202
19:55 JoeJulian hmm, 202 shows that it's connected to both bricks. Strange.
19:55 B21956 joined #gluster
19:56 aixsyd should I try to kill node 2 and see what happens?
19:56 JoeJulian Check your client log on 202: /var/log/glusterfs/{mountpoint with / replaced with -}.log
19:56 JoeJulian Look for the image filename.
19:56 aixsyd found something
19:57 aixsyd http://fpaste.org/50020/29902201/
19:57 glusterbot Title: #50020 Fedora Project Pastebin (at fpaste.org)
19:57 kkeithley_ JoeJulian: what's the deal with your split brain. AFAIK ndevos's change to glusterfsd.service should not have started glusterfsds
19:58 aixsyd JoeJulian: Fuller log: http://fpaste.org/50021/29902781/
19:58 glusterbot Title: #50021 Fedora Project Pastebin (at fpaste.org)
19:58 JoeJulian kkeithley: Don't worry about this one. I'm just trying to point out real-life examples of why I disagree with the fedora policy on this one.
19:59 JoeJulian aixsyd: I think that looks like the volume wasn't mounted when gluster1 was taken down.
20:01 glusterbot New news from newglusterbugs: [Bug 1023667] The Python libgfapi API needs more fops <http://goo.gl/vxi0Zq>
20:01 JoeJulian It's really strange because it never references 0-gv0-client-1, which would be the other brick in the replica.
20:04 aixsyd hmm
20:04 kkeithley_ We're on your side. Nobody has made any noise — as far as I know — since I harshed on the sock puppet on the Fedora devel email list.
20:04 kkeithley_ JoeJulian: ^^^
20:05 aixsyd "Adding a single GlusterFS share to Proxmox 3.1 is one of the easiest things you will do, providing the server is already set up. The trouble comes in when you are using GlusterFS in a redundant/ failover scenario as Proxmox only allows you to enter one GlusterFS server IP meaning that you are loosing all the benefits of having a redundant file system."
20:06 aixsyd REALLY?!
20:07 aixsyd So then what, exactly, is the point of adding GlusterFS support in proxmox if this is the case!?
20:07 Remco Normally you only add one IP for it to fetch the cluster config, so it looks like proxmox is not doing it right
20:08 aixsyd so... what the hell
20:08 kkeithley_ If proxmox is using Gluster native or fuse _and_ the volume is replicated, then you certainly are getting the benefit of having a redundant file system.
20:09 aixsyd but not HA.
20:09 kkeithley_ some people really struggle with understanding how gluster replication works.
20:10 aixsyd looks like the GUI is what sucks - i can add a mount point manually
20:10 kkeithley_ there's not much for proxmox to do right or wrong. If they're using gluster nfs that could be one way to do it incorrectly
20:16 diegows_ joined #gluster
20:16 aixsyd "Using the GlusterFS client you are able to specify multiple GlusterFS servers for a single volume. With the Proxmox web GUI, you can only add one IP address. To use multiple server IP addresses, create a mount point manually."
20:17 LoudNoises joined #gluster
20:28 aixsyd any idea why my 50gb volume is now showing up as 100gb?
20:30 aixsyd this makes no sense.
20:31 aixsyd I mounted the glusterfs volume to a server manually - theres no more 30gb qcow2 disk image, and the 50gb glusterfs volume now shows as 100gb
20:31 aixsyd but the "images" folder is still there
20:32 semiosis aixsyd: you can use ,,(rrdns)
20:32 glusterbot aixsyd: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
20:33 aixsyd Hm..
20:33 aixsyd so just make a bunch of A records?
20:34 semiosis thats usually how it's done
20:34 aixsyd Proxmox wants IPs, however
20:34 semiosis thats insane
20:34 aixsyd i just dont understand how a 50gb volume is showing up as 100gb..
20:35 aixsyd sounds like nothing is mirrored now
20:35 daMaestro joined #gluster
20:37 aixsyd but wow, holy shit. it works.
20:37 aixsyd i killed node 1 as windows 7 VM was installing to the glusterfs cluster and it kept right on going
20:44 aixsyd oh wait a minute. I get it, i think. It was showing 50gb before because it only saw one node
20:47 semiosis aixsyd: please ,,(pasteinfo)
20:47 glusterbot aixsyd: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
20:57 samppah_ aixsyd: sorry, i'm not familiar with proxmox.. where does it say it's 100GB?
20:57 ctria joined #gluster
20:57 JoeJulian aixsyd: It must be that one of your bricks is 100GB. Perhaps something's not mounted where it should be on one of your servers?
21:05 Guest54292 joined #gluster
21:25 qubit joined #gluster
21:26 qubit are there any limitations with glusterfs and a large number of files in a single directory? We just hit an issue where all processes accessing a directory appear to have hung completely while performing a `lookup` call (according to wchan).
21:27 qubit by large number I mean roughly 300,000
21:28 polfilm joined #gluster
21:28 JoeJulian Not limitations per se, but definitely a latency issue with that. Helped a lot with the new fuse readdirplus. If you're using 3.4.1 and mount using use-readdirp=on (not sure... that might be automatic now if your kernel's fuse supports it), that should mitigate that problem a lot.
21:29 calum_ joined #gluster
21:29 semiosis mainline kernel 3.11 (ubuntu saucy) supports that, and also the newest centos/rhel kernels
21:29 semiosis ...have it backported
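Concretely, the mount JoeJulian describes would look something like this (hostname, volume, and mount point are placeholders; whether the option takes effect depends on the kernel's FUSE readdirplus support):

    mount -t glusterfs -o use-readdirp=on server1:/bigvol /mnt/bigvol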
21:30 * qubit tracks that down
21:36 qubit and the issue just went away...
21:37 qubit thats the second time this has happened. We've got 8 boxes, all of them with multiple processes, and all of them hang at the same time. Then half hour later they all resume and start working normally :-/
21:42 failshel_ joined #gluster
21:48 B21956 left #gluster
21:52 fidevo joined #gluster
21:53 JoeJulian 30 minutes is the default frame timeout
21:54 qubit what's that?
21:54 * qubit is happy to read a doc if available
21:54 JoeJulian That's how long any single event transaction will take to time out.
21:55 qubit hrm. so question is why are the transactions failing...
21:56 JoeJulian Long story, basically that's what readdirplus was designed to fix.
22:16 Guest54292 joined #gluster
22:30 TP_ joined #gluster
22:32 TP_ Anyone willing to explain the difference between a replicated volume and a distributed replicated volume? :-) They seem pretty much the same from a setup point of view.
22:33 JoeJulian A distributed-replicated volume is what you get when you have more than one single set of replicas. Filenames are distributed among every replica set.
22:35 TP_ Ah.  So if I had 2 replicas, a file would get dumped into both, if I had a replica count of 2?
22:35 JoeJulian right.
22:36 JoeJulian If you have a replica 2 volume and 4 bricks, you'll have a distribute-replica volume. File A may end up on bricks 1 and 2, and file B may be on bricks 3 and 4.
22:37 TP_ ah, so all your files dont end up in the same replicaset .
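In CLI terms, the two layouts JoeJulian describes differ only in the number of bricks handed to volume create; a sketch with placeholder hostnames and brick paths:

    # plain replicated: 2 bricks, every file lands on both
    gluster volume create repvol replica 2 server1:/bricks/b1 server2:/bricks/b1
    # distributed-replicated: 4 bricks, files are hashed across two replica pairs
    gluster volume create distrepvol replica 2 server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 server4:/bricks/b1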
22:40 JoeJulian Just in case you're heading down the wrong train of thought, replication is mostly about fault tolerance. With a clustered filesystem, you're not so concerned with having files on every server, but rather having /access/ to those files from every client (clients may also be servers).
22:41 JoeJulian http://joejulian.name/blog/glusterfs-replication-dos-and-donts/
22:41 glusterbot <http://goo.gl/B8xEB> (at joejulian.name)
22:42 TP_ we care more about fault tolerance ;  So I was trying to determine the difference between the two types of replication.
22:42 JoeJulian yay
22:48 TP_ Interesting post. Thanks for the link Joe.  Do you have any good links for the DHT for Gluster? It seems it based on filenames...
22:49 JoeJulian @lucky dht misses are expensive
22:49 glusterbot JoeJulian: http://goo.gl/A3mCk
22:49 JoeJulian I think the start of that article explains it fairly well.
22:53 TP_ excellent .
22:53 TP_ i mean.... Eggscellent  ....
22:54 JoeJulian :)
22:57 failshell joined #gluster
22:58 Skaag JoeJulian, if it's not a quorum issue, I have no idea why else it doesn't replicate
23:00 TP_ Got to throw in my "Regular Show" references when I can.  :-)  One last question for the group... with DHT, if my file names are similar (000.txt, 001.txt, ...), do I run the risk of having the files located in the same brick?
23:03 JoeJulian Skaag: "Another crawl is in progress for ghome-client-0" which I would imagine is supposed to be a heal crawl. Perhaps that heal crawl is hung for some reason. Is this a live volume?
23:03 Skaag I guess it is
23:04 JoeJulian You always run the chance that any two files will be on the same dht subvolume, but with enough filenames they should end up being fairly evenly distributed.
23:04 JoeJulian But similarity in names should not adversely affect the hashing algorithm.
23:05 JoeJulian Skaag: I mean, is it actively being used, or can you shut it down and start it up again?
23:05 Skaag I can shut it down
23:05 Skaag it contains test data anyway
23:06 JoeJulian I would "volume stop", stop glusterd, make sure "ps ax | grep glustershd" isn't running (kill it if it is), then start glusterd and the volume again.
23:06 Skaag ok doing that now
23:07 JoeJulian It's a shotgun approach, but... ,,(meh)
23:07 glusterbot I'm not happy about it either
23:08 Skaag you're right, on the first server, it stopped and it was all clean, no stuck processes, but on the second server this process wouldn't quit: /usr/sbin/glusterfs --volfile-id=/ghome --volfile-server=xgnt-vps-001 /mnt/glusterfs/
23:08 Skaag I killed it and am restarting
23:08 JoeJulian that one's the client mount.
23:12 TP_ thanks JoeJulian
23:12 JoeJulian You're welcome. :D
23:13 T0aD joined #gluster
23:18 elyograg tonight I add my new storage servers to the cluster and begin a rebalance.  running 3.3.1, any reason I should be worried about problems that aren't my own fault for doing something wrong?
23:19 JoeJulian No*
23:20 elyograg so it should be some peer probes, add-brick operations, and then a volume rebalance.
23:20 JoeJulian yep
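For the record, that expansion sequence generally looks like the following (hostnames, volume name, and brick paths are placeholders; on a replica 2 volume, bricks are added in pairs so the replica count is preserved):

    gluster peer probe newserver1
    gluster peer probe newserver2
    gluster volume add-brick myvol newserver1:/bricks/b1 newserver2:/bricks/b1
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status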
23:21 JoeJulian How big is your current data set?
23:21 elyograg we have about 20TB of 40TB used.
23:21 elyograg adding another 40TB usable.
23:21 JoeJulian That doesn't sound too bad.
23:22 elyograg it's an 8x2 volume.  eight 5tb bricks per server.
23:23 elyograg adding two more identical servers to the mix.
23:23 JoeJulian That reminds me... I need to add a couple drives to my media cluster.
23:24 elyograg we're going to tackle the upgrade to 3.4.1 later.  that's going to be a bit more involved.
23:26 Skaag weird, mount.glusterfs says: Mount failed. Please check the log file for more details
23:26 Skaag but which log file is it referring to?
23:27 elyograg probably /var/log/glusterfs/mnt-*
23:30 Skaag found it
23:30 Skaag it says: 0-glusterfs-fuse: cannot open /dev/fuse (No such device)
23:30 Skaag but I verified and /dev/fuse does exist
23:42 Skaag ok I think I know what's going on
23:42 Skaag vzctl set $VEID --capability sys_admin:on
23:42 Skaag the admin needs to run this on the host to allow my openvz container admin access to the /dev/fuse device
23:45 * JoeJulian grows to hate openvz more and more...
