IRC log for #gluster, 2015-02-04

All times shown according to UTC.

Time Nick Message
00:01 JoeJulian I'm not coming up with any ideas...
00:04 partner please try, i don't want to go with goats... for which i have most letters filled up already.. :)
00:05 partner this really is the hardest part of my job, figuring out hostnames that can be explained for externals..
00:07 JoeJulian Yardstick?
00:08 JoeJulian Yak
00:08 eclectic_ yak isn't exactly a goat but you can milk it.
00:08 JoeJulian lol
00:08 partner "go milk your files" - there's the new slogan
00:08 JoeJulian yoda
00:08 partner thanks!
00:12 rcampbel3 yottabyte
00:15 partner alright, i should be packed up now for a couple of days of trip
00:16 partner thanks for the suggestions, don't spend your day figuring them further, i need to think the naming scheme a bit further..
00:17 rcampbel3 Ubuntu 14.04 repos have Gluster 3.4.2 - I see there's a repo with 3.6.1 - read the update docs... should I move to 3.6.1 before I go production, or just stick with the distro 3.4.2?
00:17 partner 3.6.2 is out as well
00:18 partner http://blog.gluster.org/2015/01/glusterfs-3-6-2-ga-released/
00:19 partner i'm off anyways, 5 or so hours of sleep before my new boss picks me up.. good $dayoftime
00:28 ildefonso joined #gluster
00:40 PeterA joined #gluster
00:40 PeterA got a strange NFS issue again
00:40 PeterA http://pastie.org/9884804
00:40 PeterA getting a "setting permissions" I/O error on gluster NFS
00:40 PeterA when copying a dir over
00:40 PeterA checked gluster logs and didn't see error
00:42 PeterA noticed the file was able to copy over
00:42 PeterA just wonder why getting IO error
00:43 JoeJulian cp copies the file first, then changes the permissions. Apparently it's failing to do so.
00:43 PeterA but the file ends up getting copied over w/ the right permissions…..
00:46 PeterA http://pastie.org/9884810
00:46 PeterA stack trace
00:47 PeterA seems like the getxattr failed on gluster NFS
00:48 _Bryan_ joined #gluster
00:49 JoeJulian Actually, removexattr
00:49 PeterA right….what could be the issue?
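One way to narrow down which metadata call is failing is to trace the copy from the NFS client side; a minimal sketch, assuming the copy can be reproduced from a shell (source and destination paths are illustrative):

    # show only the metadata-related syscalls and their return values while repeating the copy
    strace -f -e trace=chmod,fchmod,setxattr,removexattr \
        cp -a /local/srcdir /mnt/gluster-nfs/dstdir

Whichever call returns EIO there should line up with the failing op in the stack trace above.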
00:57 victori joined #gluster
01:04 semiosis rcampbel3: 3.4.2 is very old.  please use the ,,(ppa) packages.  i dont know what series to recommend, but even if you go with 3.4 there are newer releases than 3.4.2 available in the PPA
01:04 glusterbot rcampbel3: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
01:05 semiosis i might be able to get 3.6.2 published tonight
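For anyone following along, pulling glusterfs from a Launchpad PPA on Ubuntu looks roughly like this (a sketch; the exact PPA name depends on which series you pick from the links above):

    sudo add-apt-repository ppa:gluster/glusterfs-3.6   # or the 3.4 / 3.5 PPA
    sudo apt-get update
    sudo apt-get install glusterfs-server glusterfs-client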
01:07 jaank joined #gluster
01:33 RicardoSSP joined #gluster
01:33 RicardoSSP joined #gluster
01:42 gildub joined #gluster
01:45 victori joined #gluster
01:56 side_control joined #gluster
01:59 bharata-rao joined #gluster
02:14 sputnik13 joined #gluster
02:17 rafi joined #gluster
02:20 haomaiwa_ joined #gluster
02:21 plarsen joined #gluster
02:36 victori joined #gluster
02:47 harish joined #gluster
02:49 suman_d joined #gluster
02:57 bharata-rao joined #gluster
03:04 kshlm joined #gluster
03:15 suman_d_ joined #gluster
03:18 suman_d_ joined #gluster
03:37 suman_d joined #gluster
03:44 itisravi joined #gluster
03:47 victori joined #gluster
03:50 atinmu joined #gluster
03:50 suman_d_ joined #gluster
03:51 suman_d_ joined #gluster
03:57 nbalacha joined #gluster
04:13 shubhendu_ joined #gluster
04:14 bala joined #gluster
04:26 Manikandan joined #gluster
04:27 suman_d joined #gluster
04:29 plarsen joined #gluster
04:30 dgandhi joined #gluster
04:35 daMaestro joined #gluster
04:38 anoopcs joined #gluster
04:41 RameshN joined #gluster
04:41 jiffin joined #gluster
04:44 elico joined #gluster
04:59 anil joined #gluster
04:59 gem joined #gluster
05:00 spandit joined #gluster
05:01 soumya_ joined #gluster
05:01 kdhananjay joined #gluster
05:02 aravindavk joined #gluster
05:02 MacWinner joined #gluster
05:03 prasanth_ joined #gluster
05:04 sakshi joined #gluster
05:10 ppai joined #gluster
05:15 shubhendu_ joined #gluster
05:25 ndarshan joined #gluster
05:30 victori joined #gluster
05:33 shylesh__ joined #gluster
05:43 meghanam joined #gluster
05:45 atinmu joined #gluster
05:46 kanagaraj joined #gluster
05:47 ramteid joined #gluster
05:51 partner semiosis: is your process described anywhere, or is package building done individually per person (kaleb has done a couple of debs, no idea about the rpms)?
05:52 partner i guess jenkins and such are beyond the community somewhere inside the rh or so?
05:55 nshaikh joined #gluster
05:55 victori joined #gluster
05:59 vikumar joined #gluster
05:59 T3 joined #gluster
06:01 suman_d_ joined #gluster
06:05 suman_d joined #gluster
06:06 aravindavk joined #gluster
06:15 vikumar joined #gluster
06:17 anrao joined #gluster
06:20 overclk joined #gluster
06:28 atinmu joined #gluster
06:32 doekia joined #gluster
06:43 atalur joined #gluster
06:51 nshaikh joined #gluster
07:02 mbukatov joined #gluster
07:02 raghu joined #gluster
07:13 suman_d joined #gluster
07:16 sputnik13 joined #gluster
07:17 soumya_ joined #gluster
07:18 jtux joined #gluster
07:18 Elico joined #gluster
07:21 anrao joined #gluster
07:25 victori joined #gluster
07:27 shubhendu_ joined #gluster
07:36 Philambdo joined #gluster
07:41 deniszh joined #gluster
07:47 kovshenin joined #gluster
07:49 bala joined #gluster
07:51 suman_d_ joined #gluster
07:54 suman_d_ joined #gluster
07:59 kanagaraj joined #gluster
08:04 tanuck joined #gluster
08:10 tdasilva joined #gluster
08:11 kkeithley1 joined #gluster
08:12 nishanth joined #gluster
08:19 [Enrico] joined #gluster
08:20 kovshenin joined #gluster
08:25 Elico joined #gluster
08:27 ws2k3 joined #gluster
08:27 ndevos partner: build.gluster.org is our jenkins instance, but currently it does not build rpms/debs
08:28 ws2k3 joined #gluster
08:29 rjoseph joined #gluster
08:35 lalatenduM joined #gluster
08:36 shubhendu joined #gluster
08:36 [Enrico] joined #gluster
08:36 hagarth joined #gluster
08:41 ThatGraemeGuy_ joined #gluster
08:44 ricky-ticky joined #gluster
08:49 kdhananjay joined #gluster
08:50 tanuck joined #gluster
08:50 T0aD joined #gluster
08:52 rjoseph joined #gluster
08:52 liquidat joined #gluster
08:53 PaulCuzner joined #gluster
08:53 dusmant joined #gluster
08:57 victori joined #gluster
09:00 ricky-ticky1 joined #gluster
09:00 atalur joined #gluster
09:04 hagarth joined #gluster
09:04 atalur joined #gluster
09:06 quantum Hi! after rebuilding the partition tables on an xfs disk, the gluster volume crashed, and when i try "gluster volume status BlockStorage1-3", i see "Locking failed on data0. Please check log file for details."
09:07 quantum How to fix it?
09:08 nbalacha joined #gluster
09:09 PaulCuzner joined #gluster
09:12 glusterbot News from newglusterbugs: [Bug 1163543] Fix regression test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1163543>
09:19 ricky-ticky joined #gluster
09:20 atinmu quantum, can u restart glusterd service on data0 and then retry?
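The glusterd restart being suggested is distro-dependent; a sketch (on Debian/Ubuntu the service may be named glusterfs-server instead):

    systemctl restart glusterd    # systemd-based distros, e.g. EL7
    service glusterd restart      # sysvinit/upstart-based distros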
09:24 rjoseph joined #gluster
09:24 quantum atinmu: i tried. not work
09:26 Norky joined #gluster
09:26 atinmu quantum, how many nodes u have in the cluster?
09:27 quantum atinmu: 3
09:27 quantum all node UP, iptables off
09:27 quantum tcpdump on all nodes show mode traffic on glusters ports
09:27 atinmu quantum, restart glusterd on all three nodes
09:28 quantum atinmu: i restarted
09:29 blue_ joined #gluster
09:30 atinmu quantum, what is the command u r trying now?
09:30 atalur joined #gluster
09:30 quantum i created a bug - https://bugzilla.redhat.com/show_bug.cgi?id=1189027
09:30 glusterbot Bug 1189027: medium, unspecified, ---, bugs, NEW , Gluster volume crash after rebuild partition table on XFS disk
09:31 kanagaraj joined #gluster
09:32 PaulCuzner left #gluster
09:32 atinmu quantum, u would need to attach glusterd log files of all the nodes in the BZ
09:33 PaulCuzner joined #gluster
09:33 atinmu quantum, which version of gluster r u testing? that information is missing as well
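On a typical install, the information being asked for can be gathered per node roughly like this (the log filename can vary by version and packaging):

    glusterfs --version                                    # exact gluster version for the BZ
    tar czf $(hostname)-glusterd-log.tar.gz \
        /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # glusterd's own log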
09:36 ninkotech joined #gluster
09:36 ninkotech_ joined #gluster
09:36 DV joined #gluster
09:38 maveric_amitc_ joined #gluster
09:42 glusterbot News from newglusterbugs: [Bug 1189027] Gluster volume crash after rebuild partition table on XFS disk <https://bugzilla.redhat.com/show_bug.cgi?id=1189027>
09:46 pranithk joined #gluster
09:46 RicardoSSP joined #gluster
09:56 quantum atinmu: logs attached, i posted version in bug
09:56 quantum glusterfs-3.6.2-1.el7.x86_64
09:58 ricky-ticky joined #gluster
10:08 ppai joined #gluster
10:20 kkeithley1 joined #gluster
10:24 kkeithley1 joined #gluster
10:26 RicardoSSP joined #gluster
10:27 _shaps_ joined #gluster
10:30 nangthang joined #gluster
10:38 bala joined #gluster
10:39 navid__ joined #gluster
10:46 hchiramm joined #gluster
10:53 atinmu joined #gluster
10:58 ndevos REMINDER: in ~60 minutes from now, the weekly Gluster Community meeting starts in #gluster-meeting
11:02 soumya__ joined #gluster
11:04 jiffin1 joined #gluster
11:12 hagarth joined #gluster
11:15 kanagaraj joined #gluster
11:17 karnan joined #gluster
11:18 meghanam joined #gluster
11:18 deniszh joined #gluster
11:18 harish joined #gluster
11:21 atinmu joined #gluster
11:21 hchiramm joined #gluster
11:22 suman_d joined #gluster
11:30 sahina joined #gluster
11:40 T3 joined #gluster
11:45 ndevos REMINDER: in ~15 minutes from now, the weekly Gluster Community meeting starts in #gluster-meeting
11:48 T3 joined #gluster
11:50 LebedevRI joined #gluster
11:57 diegows joined #gluster
11:59 pkoro joined #gluster
12:00 ndevos REMINDER: the weekly Gluster Community meeting starts now in #gluster-meeting
12:00 jdarcy joined #gluster
12:01 ralala joined #gluster
12:04 rjoseph joined #gluster
12:10 anrao joined #gluster
12:10 soumya_ joined #gluster
12:11 jiffin joined #gluster
12:12 glusterbot News from newglusterbugs: [Bug 1187140] [RFE]: geo-rep: Tool to find missing files in slave volume <https://bugzilla.redhat.com/show_bug.cgi?id=1187140>
12:12 glusterbot News from resolvedglusterbugs: [Bug 1138897] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=1138897>
12:13 Elico1 joined #gluster
12:14 B21956 joined #gluster
12:14 ekman- joined #gluster
12:15 dockbram joined #gluster
12:15 twx joined #gluster
12:16 basso_ joined #gluster
12:17 jinjifra_ joined #gluster
12:20 tom][ joined #gluster
12:20 mkzero_ joined #gluster
12:22 mikedep333 joined #gluster
12:23 Rydekull joined #gluster
12:25 ramteid joined #gluster
12:26 ndarshan joined #gluster
12:27 gem joined #gluster
12:36 overclk joined #gluster
12:44 ira joined #gluster
12:54 RameshN joined #gluster
12:59 anoopcs joined #gluster
13:06 gothos Hey, do you guys have a recommended way to mount a volume on a system with systemd? My currently auto generated $gluster.mount units require only local-fs, hence the system can't mount the glusterfs yet. Using a dedicated unit file creates just a huge amount of pain at the moment.
13:07 T3 joined #gluster
13:08 kshlm gothos, you could try setting x-systemd.automount in fstab.
13:08 kshlm This way, systemd will only mount when the path is accessed.
13:08 ricky-ti1 joined #gluster
13:09 kshlm Hopefully, any application you are using only accesses the path after boot is done.
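A minimal fstab sketch of that automount approach (server, volume and mount point are made up; the systemd mount option is spelled x-systemd.automount):

    # /etc/fstab
    server1:/gvol  /mnt/gvol  glusterfs  defaults,_netdev,x-systemd.automount  0 0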
13:12 hagarth joined #gluster
13:13 gothos kshlm: okay, interesting! I'll try that and hope it'll work, especially since we have various stuff that is automounted from the glusterfs to a local mount point
13:17 kshlm gothos, HTH
13:17 quantum i solved my problem - https://bugzilla.redhat.com/show_bug.cgi?id=1189027
13:17 glusterbot Bug 1189027: medium, unspecified, ---, bugs, CLOSED NOTABUG, Gluster volume crash after rebuild partition table on XFS disk
13:18 rjoseph joined #gluster
13:29 gothos kshlm: seems this will require a reboot, since daemon-reload doesn't result in any useful change with a changed fstab
13:29 gothos will try friday at the latest
13:35 nshaikh joined #gluster
13:43 glusterbot News from resolvedglusterbugs: [Bug 1189027] Gluster volume crash after rebuild partition table on XFS disk <https://bugzilla.redhat.com/show_bug.cgi?id=1189027>
13:53 kshlm joined #gluster
13:53 Gill joined #gluster
13:57 coredump joined #gluster
14:06 shaunm joined #gluster
14:11 kanagaraj joined #gluster
14:12 bala joined #gluster
14:16 anoopcs joined #gluster
14:17 gothos Uh, I just looked into the logs on my system (3.6.2) and I'm getting a lot of the following in the etc log file: W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/cb35065770bafb8274c78edfffd6151b.socket failed (Invalid argument)
14:17 gothos any idea on what might be the cause?
14:17 nbalacha joined #gluster
14:18 wkf joined #gluster
14:18 virusuy_ joined #gluster
14:19 deniszh left #gluster
14:19 virusuy_ morning gents
14:19 deniszh joined #gluster
14:19 virusuy_ i've 4 nodes creating  a distributed-replicated volume
14:19 julim joined #gluster
14:19 virusuy_ and one of those node crashed
14:20 anoopcs_ joined #gluster
14:20 virusuy_ i found this documentation about replacing failed nodes
14:20 virusuy_ http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
14:20 virusuy_ it's correct ?
14:21 calisto joined #gluster
14:23 Manikandan_ joined #gluster
14:24 julim joined #gluster
14:31 anoopcs joined #gluster
14:32 nishanth joined #gluster
14:32 johey joined #gluster
14:32 anoopcs joined #gluster
14:33 anoopcs joined #gluster
14:33 johey Just reading the getting started guide. It says I need 64 bit. Is that a strict requirement or would it go with 32 bit?
14:34 johey I'm thinking of setting up a backup system with a Raspberry Pi 2 cluster.
14:34 anoopcs joined #gluster
14:49 ndevos gothos: could be SElinux?
14:50 ndevos johey: 32-bit on ARM works for me - but I have hit some uncommon bugs because of the 32/64 bit mismatch
14:51 virusuy_ how many nodes can fail simultaneously in a distributed-replicated 4 node cluster ?
14:51 virusuy_ 2 ?
14:51 ndevos virusuy_: 3.4 is the oldest release that still get updates, so I would suggest http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
14:52 jmarley joined #gluster
14:52 virusuy_ ndevos: thanks, i was reading that how-to right now
14:52 gothos ndevos: SELinux is disabled on the machines
14:53 kkeithley_ johey: 32-bit is fine.  see  http://download.gluster.org/pub/gluster/glusterfs/LATEST/Pidora/ or http://download.gluster.org/pub/gluster/glusterfs/LATEST/Raspbian/
14:53 meghanam joined #gluster
14:53 ndevos kkeithley_: I think the RPi-2 has a ARMv7, not a v6 :)
14:54 ndevos so, the standard Fedora packages should work too, I think
14:55 kkeithley_ I don't know about such a thing as a Raspberry Pi-2.
14:55 dgandhi joined #gluster
14:56 ndevos it got released just a few days ago
14:56 kkeithley_ I know about Model B and Model B+
14:56 kkeithley_ oh, okay
14:56 kkeithley_ Actually shipping yet?
14:56 bennyturns joined #gluster
14:57 lmickh joined #gluster
14:57 tru_tru joined #gluster
14:57 Norky joined #gluster
14:57 gothos yep
14:57 ndevos yeah, see http://www.raspberrypi.org/raspberry-pi-2-on-sale/
14:59 kkeithley_ johey: armv7hl bits at http://download.gluster.org/pub/gluster/glusterfs/LATEST/Fedora/fedora-21/
15:00 kkeithley_ But I'm holding out for inexpensive armv8/aarch64.  There's supposed to be an inexpensive devel board coming soon.
15:03 johey Ok, cool.
15:03 johey Then I'll at least buy a few pieces of Pi2 for evaluation.
15:03 kkeithley_ Well, hold on. I'm not sure Fedora boots on an RPi-2 though
15:04 johey It's ok. I can make my own build of glusterfs if needed.
15:04 atalur joined #gluster
15:05 [Enrico] joined #gluster
15:11 johey When reading about gluster, I find it to be conceptually similar to raid, where striped is like raid 0, replicated like raid 1, striped-replicated like raid 10 and so on. Is there any setup that can be compared to raid 5 or raid 6?
15:13 nishanth joined #gluster
15:16 shubhendu joined #gluster
15:17 kshlm joined #gluster
15:18 julim joined #gluster
15:19 wkf_ joined #gluster
15:20 wushudoin joined #gluster
15:30 plarsen joined #gluster
15:31 ndevos johey: disperse (ec) is more like raid5
15:33 johey ndevos: Ah, thanks! It might be what I'm looking for.
15:33 ndevos johey: see http://www.gluster.org/community/documentation/index.php/Features/disperse
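A minimal sketch of creating such a disperse volume with the 3.6-era CLI (hostnames and brick paths are made up):

    # 6 bricks: data spread over 4, redundancy 2, so any 2 bricks can be lost
    gluster volume create backupvol disperse 6 redundancy 2 \
        server{1..6}:/bricks/brick1/backupvol
    gluster volume start backupvol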
15:35 johey Already googled my way to that place. :)
15:36 kkeithley_ _like_ being the operative word. Stripe is _like_ raid 0. It is _not_ raid 0.
15:37 kkeithley_ Replica is _like_ raid 1. It is _not_ raid 1.
15:39 ndevos johey: the concepts are similar, but the behaviour/performance is quite different, ,,(stripe) explains it to some extent
15:39 glusterbot johey: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
15:42 wkf joined #gluster
15:45 hagarth joined #gluster
15:49 plarsen joined #gluster
15:50 johey glusterbot: Ok, thanks. Makes sense.
15:53 johey Can you extend a Disperse volume with more bricks?
15:54 kkeithley_ xavih: ^^^^
15:58 ndevos johey: yeah, but you have to extend them like distribute does it with afr - add them in pairs
16:00 Lee- joined #gluster
16:01 johey ndevos: Ok. That also goes for the redundancy bricks? I mean, say that I double the number of disperse bricks, do I also need to double the redundancy?
16:03 johey Asking this so I can make a well informed order of hardware. :)
16:04 Philambdo joined #gluster
16:05 ndevos johey: uh, not sure what you mean - but if you start with a 4-brick disperse volume, and you want to add more bricks, you need to add 4 bricks at a time
16:05 diegows joined #gluster
16:05 ndevos it becomes like distribute-disperse then
16:08 soumya__ joined #gluster
16:18 coredump joined #gluster
16:25 CyrilPeponnet joined #gluster
16:32 _dist joined #gluster
16:38 bala joined #gluster
16:38 johey ndevos: Ok, say I have six bricks of which two are redundancy and four are usable. Now I want to double it up. Do I add four or six?
16:39 johey If I understand your distribute-disperse right, I guess I need to add six?
16:39 ndevos johey: thats correct - you will have two sub-volumes that are their own disperse-set, and a distribute-layer on top of them to make it one volume
16:42 johey Ok. So I cannot grow one disperse volume with additional disks, but rather setup another identical one and see them as two distributed sets? That would do I guess.
16:43 glusterbot News from resolvedglusterbugs: [Bug 1119582] glusterd does not start if older volume exists <https://bugzilla.redhat.com/show_bug.cgi?id=1119582>
16:44 johey Can I add yet another 4+2 disperse to an already existing 2(4+2) distribute-disperse volume?
16:44 johey Or will I need to add another 2(4+2) volume to a distribute-distribute-disperse volume?
16:46 MacWinner joined #gluster
16:46 wkf joined #gluster
16:46 ndevos johey: from my understanding, you can not change the layout of a disperse (sub)volume (yet)
16:47 semiosis but you should be able to add several such disperse subvols in a single distribution layer
16:47 semiosis distribute over distribute is not really an option
16:47 semiosis (it is in theory, but it doesnt make sense in practice)
16:48 ndevos yes, multiple (4+2) disperse sub-volumes can be combined with distribute, that works now already - use gluster volume add-brick .... <with many bricks at once>
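Continuing the sketch above, adding a second (4+2) disperse set to turn the volume into distribute-disperse would look roughly like this (hostnames/paths again made up):

    # add a whole new 6-brick (4+2) set in one add-brick call
    gluster volume add-brick backupvol \
        server{7..12}:/bricks/brick1/backupvol
    gluster volume rebalance backupvol start   # optionally spread existing files onto the new set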
16:48 sputnik13 joined #gluster
16:48 johey Ok. I think I'm starting to get an understanding. :)
16:50 ndevos semiosis: actually, the tiering that gets into 3.7/4.0 uses distribute-over-distribute :D
16:50 semiosis ndevos: that's wow
16:51 semiosis ndevos: any word on NSR?  is it moving forward?
16:51 ndevos semiosis: I dont know about progress on NSR
16:51 rwheeler joined #gluster
16:51 ndevos semiosis: there is a meeting about 4.0 on Friday, jdarcy did not want to spoil anything in todays meeting
16:52 semiosis hah
16:52 semiosis then there's something to spoil!
16:52 johey Is this correct: For distribute and stripe, you can add any number of bricks at any time. For redundant, you can add more bricks as redundancy but not grow the volume. Disperse can be seen as a brick in sense of adding to existing distributes or stripes.
16:53 ndevos johey: I'm not so sure about stripe in that case
16:53 johey Ok.
16:54 hagarth joined #gluster
16:54 johey Maybe a good idea would be to play around some with vm's before ordering hardware. :)
16:55 jobewan joined #gluster
16:55 johey Good to know there is a helpful irc channel out there.
16:57 codex Running into a strange problem with gluster 3.6 -- I have 4 systems (2 with 1TB of disk for each brick, and 2 with 500GB for each brick). They are configured as distributed-striped. When mounting the space via a gfs mount, it fills the 500GB bricks up to 100% and then says out of space
16:57 codex I thought this exact issue was specifically fixed in 3.6?
16:57 diegows joined #gluster
16:58 ndevos johey: this diagram shows distribute-replica, you can swap replica for stripe/disperse - https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html-single/Administration_Guide/index.html#Creating_Distributed_Replicated_Volumes
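For comparison with that diagram, a distribute-replica volume is created roughly like this (a sketch; with replica 2, bricks are paired in the order they are listed):

    gluster volume create gv0 replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server3:/bricks/b1 server4:/bricks/b1
    gluster volume start gv0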
16:59 tanuck joined #gluster
17:00 tanuck joined #gluster
17:00 ndevos codex: I guess you would have more success without the ,,(stripe)
17:00 glusterbot codex: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
17:00 semiosis hi codex!
17:01 semiosis welcome to #gluster
17:01 codex semiosis: hey! -- didn't see you here :)
17:01 semiosis I get around
17:01 codex ndevos: thanks
17:03 semiosis best practice is to have bricks the same size.  you could use lvm to split up your 1TB bricks
17:03 semiosis also, stripe is usually not what people want
17:04 semiosis what's your use case?
17:05 gem joined #gluster
17:06 B21956 joined #gluster
17:08 B21956 joined #gluster
17:09 johey ndevos: Ok, looking good. I am interested in trying out disperse volumes. They really seem to be what I am looking for. And perhaps also distribute-disperse.
17:11 ndevos johey: cool, xavih is the main developer/maintainer for disperse, and he is very responsive in case you need to file a bug
17:11 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:13 codex semiosis: basically, I want the largest space available out of (1TB) (1TB) (500GB) (500GB), where I can take down ONE of the 1TB servers at any given time OR ONE of the 500GB servers at any time
17:14 semiosis codex: can you say more about your use case?  what kind of workload?  number of files vs. size of files, etc
17:14 johey ndevos: Ok, good. :)
17:15 codex yea - so each server is running KVM and has 2 local disks (raid0) -- we want to use the extra local space to run VMs and be able to have it as a shared meta volume between the servers
17:16 codex but more so, the end goal is get a few "dedicated storage servers" (custom built) and run glusterFS -- again for VMs
17:16 semiosis are the VMs going to mount glusterfs for file access, or is the hypervisor using glusterfs for live disk images?
17:17 johey How do you address the single point of failure problem regarding the fact that you connect the client to one single node?
17:19 codex semiosis: only the hypervisor
17:19 semiosis johey: ,,(mount server)
17:19 glusterbot johey: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
17:19 semiosis johey: furthermore, ,,(rrdns)
17:19 glusterbot johey: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
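Besides rrdns, the fuse mount also takes a backup volfile server option, roughly like this (a sketch; the option spelling has varied across releases, e.g. backupvolfile-server vs. backup-volfile-servers):

    mount -t glusterfs -o backupvolfile-server=server2 server1:/gvol /mnt/gvol
    # or the fstab equivalent:
    # server1:/gvol  /mnt/gvol  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0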
17:19 B21956 joined #gluster
17:20 johey Very nice.
17:21 codex semiosis: from reading here (https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/sect-User_Guide-Setting_Volumes-Striped_Replicated.html) -- this looks like the ideal scenario, but only if it's smart enough to stop  distributing the 2+4 when  server3+4 are full
17:23 semiosis codex: remember, glusterfs does file storage, not block storage.  if your files continue growing glusterfs can't help you.  it can only avoid placing new files on nearly full bricks
17:23 semiosis and you have no control over whether an image will be placed on the 1TB pair or the 500GB pair (unless you get crafty with filenames and ensure they hash to the correct pair)
17:23 gem_ joined #gluster
17:24 B21956 joined #gluster
17:24 jobewan joined #gluster
17:31 rcampbel3 joined #gluster
17:32 calisto joined #gluster
17:34 Pupeno joined #gluster
17:37 suman_d joined #gluster
17:48 wkf_ joined #gluster
17:51 wkf joined #gluster
18:18 victori joined #gluster
18:18 plarsen joined #gluster
18:18 plarsen joined #gluster
18:21 krullie joined #gluster
18:24 Rapture joined #gluster
18:26 suman_d_ joined #gluster
18:29 lalatenduM joined #gluster
18:31 virusuy joined #gluster
18:32 wkf_ joined #gluster
18:34 PeterA joined #gluster
18:36 wkf joined #gluster
18:38 Philambdo joined #gluster
18:39 maveric_amitc_ joined #gluster
18:39 sputnik13 joined #gluster
18:40 B21956 joined #gluster
18:42 SOLDIERz joined #gluster
18:59 shaunm joined #gluster
19:01 codex semiosis: correct -- i pre-provision all vmdk's to max size
19:01 codex but other than that, does the design make sense, and shouldn't it be smart enough to distribute evenly, until one is full and then use the rest of the space?
19:03 semiosis i still recommend splitting your 1TB drives with lvm so you have six 500GB bricks
19:03 semiosis but yes, glusterfs is smart enough to try to place files on different bricks than they should be placed if the proper bricks are almost full
19:03 semiosis but that's not optimal
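A rough sketch of that LVM split on one of the 1TB-disk servers (device, VG/LV names and sizes are illustrative):

    pvcreate /dev/sdb
    vgcreate gluster_vg /dev/sdb
    lvcreate -L 500G -n brick1 gluster_vg
    lvcreate -L 500G -n brick2 gluster_vg
    mkfs.xfs -i size=512 /dev/gluster_vg/brick1   # 512-byte inodes are commonly recommended for gluster bricks
    mkfs.xfs -i size=512 /dev/gluster_vg/brick2
    mkdir -p /bricks/brick1 /bricks/brick2
    mount /dev/gluster_vg/brick1 /bricks/brick1
    mount /dev/gluster_vg/brick2 /bricks/brick2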
19:04 ira joined #gluster
19:11 codex oh interesting
19:12 wkf joined #gluster
19:12 shaunm joined #gluster
19:21 wkf joined #gluster
19:28 deniszh joined #gluster
19:30 Arminder- joined #gluster
19:34 deniszh joined #gluster
19:36 deniszh1 joined #gluster
19:36 plarsen joined #gluster
19:38 B21956 left #gluster
19:41 jobewan joined #gluster
19:41 T3 joined #gluster
19:42 B21956 joined #gluster
19:43 deniszh joined #gluster
19:49 bala joined #gluster
19:49 kkeithley1 joined #gluster
19:59 diegows joined #gluster
20:02 B21956 left #gluster
20:04 B21956 joined #gluster
20:12 telmich good evening
20:13 telmich if I setup a new replicated volume on ubuntu 14.04, which are the recommended packages / versions?
20:13 telmich i.e. should I go with 3.6ppa from https://launchpad.net/~gluster or stay with 3.4.2 or even 3.5 from ppa?
20:16 blue_vd joined #gluster
20:18 wkf joined #gluster
20:23 B21956 joined #gluster
20:26 deniszh joined #gluster
20:27 Philambdo joined #gluster
20:28 mbelaninja joined #gluster
20:29 bene2 joined #gluster
20:30 mbelaninja Hello everyone.  I'm trying to make our gluster-backed virtual machines a bit more resilient to general network badness.  Our vms are using virtio-scsi and have a 300 sec timeout on the disks.  This should prevent them from going ro immediately, but if I stop the network on the hypervisor (to simulate loss of the gluster connection) they go read-only immediately
20:30 mbelaninja any operations on the gluster mountpoint from the hypervisor return an inaccessible message right away
20:31 mbelaninja is it possible to have the hypervisor queue this io instead of immediately failing?
20:31 mbelaninja (fuse.glusterfs mount)
20:35 wkf joined #gluster
20:43 tanuck joined #gluster
20:43 bala joined #gluster
20:44 redbeard joined #gluster
20:50 n-st joined #gluster
20:51 B21956 joined #gluster
20:51 h4rry_ joined #gluster
20:55 T3 joined #gluster
20:58 bet_ joined #gluster
20:59 wkf joined #gluster
21:08 ralala joined #gluster
21:23 ralalala joined #gluster
21:25 JoeJulian mbelaninja: You should not be getting an EACCES. Are you using libgfapi? Have you checked the client log? Are you sure the client is connecting to all replicas? Are you using a replicated volume?
21:25 JoeJulian telmich: 3.6ppa
21:26 mbelaninja yes, libgfapi
21:27 mbelaninja specific error is: Transport endpoint is not connected
21:27 mbelaninja which is returned immediately
21:27 mbelaninja instead of queueing the IO until gluster returns (which is my preference)
21:28 JoeJulian If you have a replicated volume, you should get your expected behavior.
21:28 JoeJulian If you're not getting that, something is wrong. Usually network related.
21:28 mbelaninja # gluster volume info
21:28 mbelaninja Volume Name: gvol
21:28 mbelaninja Type: Distributed-Replicate
21:30 mbelaninja Since I'm effectively severing network connectivity to the gluster client, how would having additional replica paths help?
21:31 JoeJulian Ah, if you're severing all network connectivity then you are experiencing the correct behavior.
21:31 mbelaninja there is no way to make the client queue IO during a network outage?
21:32 mbelaninja (end goal is to prevent the VMs on that hypervisor from going readonly if the network blips)
21:32 JoeJulian Well, you will have ping-timeout before that happens.
21:33 mbelaninja there's no queue_if_no_path equivalent ?
21:33 JoeJulian not unless I've missed something.
21:34 mbelaninja hrm, ok
21:34 JoeJulian Everything just hangs there waiting for the tcp connection for up to ping-timeout (42 seconds by default).
21:34 mbelaninja can I crank that ping-timeout up to like 6-7 minutes?
21:34 JoeJulian Sure
21:35 JoeJulian gluster volume set help
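The knob being referred to, using the volume name from the volume info pasted above (420 seconds gives the ~7 minutes mentioned):

    gluster volume set gvol network.ping-timeout 420
    gluster volume info gvol | grep ping-timeout   # reconfigured options show up under 'Options Reconfigured'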
21:35 mbelaninja that should give me the behavior I desire then?  Hidden gotchas?
21:35 mbelaninja (I don't anticipate losing network for 6 minutes)
21:36 JoeJulian I can't think of any. The only gotcha that comes to mind would be 30 minutes out.
21:36 mbelaninja if it's down for 30 mins I deserve to go readonly :D
21:36 mbelaninja ty JoeJulian
21:36 JoeJulian You're welcome.
21:37 JoeJulian If you think that an improvement could be made to that, feel free to file a bug report asking for an enhancement.
21:37 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
21:37 mbelaninja i'll experiment with the ping-timeout and go from there
21:37 mbelaninja that may be sufficient
21:38 wkf_ joined #gluster
21:50 ralalala joined #gluster
22:05 dgandhi joined #gluster
22:10 Gill joined #gluster
22:10 Philambdo joined #gluster
22:17 ralalala joined #gluster
22:18 gildub joined #gluster
22:23 jmarley joined #gluster
22:23 jmarley joined #gluster
22:28 ralalala joined #gluster
22:29 Gill joined #gluster
22:29 DV joined #gluster
22:50 cbrx01 joined #gluster
22:52 edwardm61 joined #gluster
22:54 jobewan joined #gluster
23:06 wkf joined #gluster
23:14 Pupeno joined #gluster
23:14 Pupeno joined #gluster
23:23 rcampbel3 joined #gluster
23:25 B21956 joined #gluster
23:28 jmarley joined #gluster
23:36 B21956 joined #gluster
23:41 Pupeno joined #gluster
23:42 Pupeno joined #gluster
