IRC log for #gluster, 2016-04-06

All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:21 bennyturns joined #gluster
00:22 MugginsM joined #gluster
00:26 DV joined #gluster
00:27 RameshN joined #gluster
00:30 timotheus1 joined #gluster
00:52 amye joined #gluster
01:01 haomaiwa_ joined #gluster
01:06 MugginsM our gluster, erm, cluster seems unwell - we've got a 2 server replicated setup, and freshly created files take a while to appear on the other server
01:06 MugginsM it looks like it's healing continually, rather than syncing when the files are written
01:07 MugginsM my current suspicion is that the server is 3.6.8 and the clients are 3.7.9, is that likely?
01:07 MugginsM (suspicion that that's the cause)
01:19 baojg joined #gluster
01:24 gbox MugginsM: Yeah, there need to be more compatibility tests, considering how fast gluster development is moving
01:24 MugginsM our problem is that our servers are Ubuntu Precise, and there aren't 3.7 builds available
01:25 MugginsM (some) of our servers
01:25 MugginsM so we have several sites, all on 3.7.9 happily. But one site has Precise servers
01:25 gbox MugginsM: I heard there was a PPA
01:26 MugginsM haven't seen one for 3.7.x
01:26 gbox MugginsM:  You almost want to track the latest since a lot of bugfixes are not backported (unless it's RHS)
01:26 MugginsM yes, that's why we track the latest stable where possible
01:27 gbox MugginsM:  Sorry I'm on the Centos/RHEL side with Gluster (Debian/Ubuntu at home! :)
01:27 MugginsM if there's a decent Ubuntu Precise (12.04 I think) build of 3.7.9 that'd be happy time
01:27 MugginsM just want to make sure it's the problem here though, before I downgrade all the clients
01:27 gbox MugginsM: https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7 ?
01:27 glusterbot Title: glusterfs-3.7 : “Gluster” team (at launchpad.net)
01:28 MugginsM yeah, they don't have builds for that version of Ubuntu
01:28 MugginsM it's "stable", but old
01:28 MugginsM I tried building them manually, I've done that in the past, but seems like it's not going to be easy
01:29 gbox MugginsM:  Always my worry with Debian.  I have a one-year-old version of Chrome on SID that Google has already disowned.
01:29 Lee1092 joined #gluster
01:29 MugginsM unfortunately we have this one site in an "enterprise data center"
01:29 MugginsM upgrading OS is a multi-year plan
01:29 MugginsM with all sorts of paperwork
01:30 MugginsM it's scary just how nice our AWS based sites are in comparison :)
01:30 gbox MugginsM:  What OS is the DC running?
01:31 MugginsM the two file servers are Ubuntu 12.04, all the others are Ubuntu 14.04
01:31 MugginsM we've only just finished upgrading all the clients from 10.04 to 14.04
01:32 MugginsM seriously, it's like 18 months to do an upgrade like that in that kind of setup
01:32 MugginsM on AWS it's maybe a couple of days, mostly for testing
01:33 MugginsM but government client, and data sovereignty issues - no "cloud" in New Zealand
01:33 gbox MugginsM:  Well post-factum verified 3.7.* compatibility (for any client-server combo) but you'd have to search for minor version differences.  Seems like there might be some.
01:33 MugginsM I can't even find release notes for 3.6.9 never mind anything about changes from 3.6 to 3.7 :-/
01:34 MugginsM It might not be the problem, just it's the only thing I can think of at the moment
01:34 EinstCrazy joined #gluster
01:34 gbox MugginsM:  Ha, yeah it goes 3.6.0 to 3.6.3 to 3.7.0
01:35 MugginsM it's caused us quite a lot of customer embarrassment
01:35 gbox MugginsM: well look at 3.7.0 and see what that says
01:35 MugginsM because the effect is files vanishing
01:35 MugginsM process writes files, zips them up, sends customer zip. half the files are missing
01:35 MugginsM we've got a workaround for that to check everything carefully, but it's not very comforting
01:37 gbox MugginsM: nevermind, not helpful: https://gluster.readthedocs.org/en/latest/Upgrade-Guide/Upgrade%20to%203.7/
01:37 glusterbot Title: Upgrade to 3.7 - Gluster Docs (at gluster.readthedocs.org)
01:37 MugginsM I might just downgrade the whole lot to 3.6.9 to be safe
01:37 MugginsM yeah, I've seen nothing about running 3.7 clients with a 3.6 server
01:39 gbox Muggins: I hate to say it but I've seen that kind of thing happening even with the client & server both on 3.7.6
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:57 johnmilton joined #gluster
01:59 gbox MugginsM:  What happens when you run gluster v heal VOLNAME info
02:00 MugginsM shows a bunch of file "Possibly undergoing heal"
02:00 MugginsM the files change over time
02:00 MugginsM stuff that's likely to be being written right now
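The heal-status check gbox suggests can be sketched as follows; VOLNAME is a placeholder, and a live cluster is assumed (run on any server node). The second command is the terser per-brick count available on 3.6 and later:

```shell
# List entries pending or currently undergoing self-heal
gluster volume heal VOLNAME info
# Rough count of pending heals per brick (less verbose than info)
gluster volume heal VOLNAME statistics heal-count
```

A steadily churning `info` list of freshly written files, as MugginsM describes, suggests writes are landing on one replica and being healed across afterwards, rather than being written to both replicas in parallel.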
02:00 gbox MugginsM:  Wow that is unusual
02:00 gbox MugginsM: This is a DC so 10Gbps yes?
02:00 MugginsM 4Gbps :)
02:01 gbox MugginsM: OK good enough
02:01 EinstCra_ joined #gluster
02:01 MugginsM the two servers have a private 10Gbps link between them that gluster syncs over
02:01 MugginsM so it's probably just disk speed
02:01 MugginsM no sign of overload anywhere - network is happy, disk IO is pretty happy
02:01 MugginsM when a file writes, it seems to want to heal it rather than write it twice in parallel
02:02 MugginsM network shows about the right amount of data going to both servers
02:02 gbox MugginsM:  Heal should not be the normal course of action for AFR
02:02 MugginsM yep
02:02 MugginsM is why I'm wondering if it's a compatibility issue
02:02 gbox MugginsM:  You could bug the devs on the other channel.  Seems like they would know
02:03 gbox MugginsM:  Good luck, post back!
02:13 haomaiwa_ joined #gluster
02:18 vmallika joined #gluster
02:19 nhayashi joined #gluster
02:25 luizcpg joined #gluster
02:37 MugginsM joined #gluster
02:48 luizcpg joined #gluster
02:59 skoduri joined #gluster
03:01 haomaiwa_ joined #gluster
03:15 atalur joined #gluster
03:17 nishanth joined #gluster
03:19 PaulCuzner left #gluster
03:34 overclk joined #gluster
03:49 amye joined #gluster
03:49 atinm joined #gluster
03:51 sakshi joined #gluster
03:54 edong23 joined #gluster
03:59 shubhendu joined #gluster
03:59 nbalacha joined #gluster
04:00 gem joined #gluster
04:01 7YUAAP3RE joined #gluster
04:06 RameshN joined #gluster
04:07 kshlm joined #gluster
04:24 nishanth joined #gluster
04:25 rafi1 joined #gluster
04:26 chirino_ joined #gluster
04:27 spalai joined #gluster
04:27 spalai left #gluster
04:42 hgowtham joined #gluster
04:43 karthik___ joined #gluster
04:43 jiffin joined #gluster
04:48 jiffin joined #gluster
04:53 vmallika joined #gluster
04:56 aspandey joined #gluster
04:58 poornimag joined #gluster
05:01 haomaiwang joined #gluster
05:03 jiffin1 joined #gluster
05:10 jiffin1 joined #gluster
05:12 ndarshan joined #gluster
05:15 karnan joined #gluster
05:21 jiffin joined #gluster
05:21 gowtham joined #gluster
05:25 jiffin1 joined #gluster
05:25 ahino joined #gluster
05:29 aravindavk joined #gluster
05:29 Bhaskarakiran joined #gluster
05:36 jiffin joined #gluster
05:36 spalai joined #gluster
05:38 MugginsM joined #gluster
05:39 Manikandan joined #gluster
05:41 Saravanakmr joined #gluster
05:48 jiffin1 joined #gluster
05:49 ahino joined #gluster
05:49 mhulsman joined #gluster
05:53 mzink left #gluster
05:53 geniusoftime_ joined #gluster
05:54 geniusoftime_ I have a distributed + replicated volume, replica 2. Is there any way to remove bricks without data loss? (i.e. migrate the data off the bricks before removing them)
05:56 skoduri joined #gluster
06:01 haomaiwa_ joined #gluster
06:02 ppai joined #gluster
06:16 prasanth joined #gluster
06:18 mhulsman joined #gluster
06:19 ggarg joined #gluster
06:20 atalur joined #gluster
06:23 geniusoftime_ the remove brick command with start parameter will do what I want.
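For the record, the decommission sequence geniusoftime_ found goes roughly like this; hostnames and brick paths are placeholders, and on a replica-2 distributed-replicate volume the two bricks of one replica pair are removed together:

```shell
# Start migrating data off the bricks to be removed
gluster volume remove-brick VOLNAME server3:/brick server4:/brick start
# Watch the migration progress
gluster volume remove-brick VOLNAME server3:/brick server4:/brick status
# Commit only once status reports the migration completed
gluster volume remove-brick VOLNAME server3:/brick server4:/brick commit
```

Committing before the migration finishes is what causes data loss, so the status check is not optional.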
06:24 ashiq joined #gluster
06:30 jtux joined #gluster
06:30 pur joined #gluster
06:32 kshlm joined #gluster
06:32 rastar joined #gluster
06:45 nangthang joined #gluster
07:01 haomaiwa_ joined #gluster
07:07 gem joined #gluster
07:09 jtux joined #gluster
07:10 JesperA joined #gluster
07:11 jri joined #gluster
07:18 deniszh joined #gluster
07:23 harish joined #gluster
07:23 Slashman joined #gluster
07:29 mhulsman joined #gluster
07:30 fsimonce joined #gluster
07:30 jwd joined #gluster
07:36 ahino joined #gluster
07:39 purpleidea joined #gluster
07:39 ctria joined #gluster
07:39 kshlm joined #gluster
07:40 gem joined #gluster
07:41 jwaibel joined #gluster
07:42 [Enrico] joined #gluster
07:44 MugginsM joined #gluster
07:55 goretoxo joined #gluster
07:57 jiffin joined #gluster
08:01 haomaiwa_ joined #gluster
08:01 harish joined #gluster
08:03 farblue joined #gluster
08:05 Gnomethrower joined #gluster
08:06 Norky joined #gluster
08:06 jiffin joined #gluster
08:10 arcolife joined #gluster
08:11 jiffin1 joined #gluster
08:23 spalai joined #gluster
08:26 arcolife joined #gluster
08:27 [diablo] joined #gluster
08:31 vmallika joined #gluster
08:34 owlbot joined #gluster
08:38 [Enrico] joined #gluster
08:42 kdhananjay joined #gluster
08:45 itisravi joined #gluster
08:48 jiffin joined #gluster
08:50 Romeor hmm
08:50 Romeor broken dependencies on debian jessie gluster 3.6.9
08:54 Romeor mailed dev list
08:59 JesperA- joined #gluster
09:00 nbalacha joined #gluster
09:01 haomaiwa_ joined #gluster
09:02 spalai joined #gluster
09:19 kovshenin joined #gluster
09:24 Vigdis joined #gluster
09:28 TvL2386 joined #gluster
09:29 Vigdis Hi, I'm running glusterfs 3.7.6 and gluster volume heal datas info split-brain returns a lot of lines like <gfid:2f788e0c-07b4-43a1-8e6c-7acf6c2e9b28>. Should I follow https://gluster.readthedocs.org/en/latest/Troubleshooting/split-brain/ ?
09:29 glusterbot Title: Split Brain - Gluster Docs (at gluster.readthedocs.org)
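The split-brain policies that guide describes are also exposed on the 3.7 CLI; a sketch against Vigdis's volume, where FILE and HOST:/brick are placeholders (gfid-only entries usually need resolving to paths first, as the guide explains):

```shell
# List files (or gfids) currently in split-brain
gluster volume heal datas info split-brain
# Resolve one file by keeping the copy with the newest mtime...
gluster volume heal datas split-brain latest-mtime FILE
# ...or by naming the brick whose copy should win
gluster volume heal datas split-brain source-brick HOST:/brick FILE
```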
09:45 Romeor any help on sharing gluster volume via samba using vfs on debian?
09:46 ndarshan joined #gluster
09:48 itisravi joined #gluster
09:49 jiffin1 joined #gluster
09:59 d0nn1e joined #gluster
10:01 haomaiwang joined #gluster
10:03 kdhananjay joined #gluster
10:03 jwd joined #gluster
10:04 gessitin left #gluster
10:06 hackman joined #gluster
10:06 jiffin joined #gluster
10:06 legreffier joined #gluster
10:07 legreffier hi
10:07 glusterbot legreffier: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:07 Romeor where do i get samba vfs module for debian jessie?
10:08 legreffier these commands are outdated -> http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_Setting_the_Log_Directory , and this page won't say how to switch from /var/log to whatever i want : http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Logging/
10:08 glusterbot Title: Logging - Gluster Docs (at gluster.readthedocs.org)
10:08 legreffier any way i can set that up ? i don't really wanna make symlinks and such.
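There is no single volume option for this, but the log location can be steered per daemon and per mount without symlinks; a sketch with placeholder paths, assuming these options behave the same on the installed version:

```shell
# glusterd: pass an alternative log file at startup (e.g. in the init script)
glusterd --log-file=/data/logs/glusterd.log
# client: pick the log file at mount time
mount -t glusterfs -o log-file=/data/logs/client.log server1:/VOLNAME /mnt/gluster
```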
10:09 skoduri joined #gluster
10:12 jiffin1 joined #gluster
10:15 legreffier joined #gluster
10:15 legreffier joined #gluster
10:15 legreffier re, sorry, my friend's box hiccupped
10:15 jiffin joined #gluster
10:17 aspandey joined #gluster
10:17 fcami joined #gluster
10:19 atalur joined #gluster
10:19 Manikandan joined #gluster
10:20 nbalacha joined #gluster
10:24 jiffin joined #gluster
10:51 Rasathus_ joined #gluster
10:59 unlaudable joined #gluster
11:00 poornimag joined #gluster
11:01 64MAARS36 joined #gluster
11:03 nbalacha joined #gluster
11:18 scobanx joined #gluster
11:34 kkeithley dscastro: I would think you could use a load balancer in front of nfs-ganesha. I haven't tried it myself, but I don't see why it wouldn't work.  Strictly speaking it wouldn't be HA.  You don't have to have/use HA in order to use nfs-ganesha fronting Gluster volumes.
11:37 Romeor if someone got experience on building samba-vfs-glusterfs module under debian jessie, pvt me pls
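Assuming a samba build that actually ships vfs_glusterfs (which is the sticking point on jessie, where the stock package is built without gluster support), the share definition Romeor is after looks roughly like this; gv0 and the log path are placeholders:

```ini
[gv0]
    path = /
    read only = no
    vfs objects = glusterfs
    glusterfs:volume = gv0
    glusterfs:logfile = /var/log/samba/glusterfs-gv0.%M.log
    ; kernel locking does not apply to a gluster-backed share
    kernel share modes = no
```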
11:43 bfoster joined #gluster
11:48 ppai joined #gluster
11:49 itisravi joined #gluster
11:50 kdhananjay1 joined #gluster
11:53 mhulsman joined #gluster
11:55 kkeithley Gluster Community Meeting in T-minus Five minutes and counting in #gluster-meeting (on freenode)
11:56 atalur joined #gluster
11:59 jiffin joined #gluster
11:59 johnmilton joined #gluster
12:01 skoduri joined #gluster
12:01 skoduri joined #gluster
12:01 haomaiwa_ joined #gluster
12:02 jiffin1 joined #gluster
12:03 jdarcy joined #gluster
12:03 mhulsman joined #gluster
12:07 hchiramm_ joined #gluster
12:07 vmallika joined #gluster
12:07 mhulsman joined #gluster
12:08 jiffin1 joined #gluster
12:09 spalai joined #gluster
12:10 atalur joined #gluster
12:11 rastar joined #gluster
12:15 jiffin1 joined #gluster
12:17 ira joined #gluster
12:18 simon-dev joined #gluster
12:21 simon-dev hi there, i'm having trouble getting glusterfs server (3.2.7) to work on debian (8.3 - jessie). i already have a gluster server (3.2.7) running fine on debian (7.1 - wheezy). Are there any known compatibility issues with running 3.2.7 on jessie?
12:22 jiffin1 joined #gluster
12:23 [Enrico] joined #gluster
12:23 kkeithley #repos
12:23 ppai joined #gluster
12:23 ndevos @repos
12:23 glusterbot ndevos: See @yum, @ppa or @git repo
12:24 kkeithley @ppa
12:24 glusterbot kkeithley: The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
12:24 ndevos ppa for debian?
12:25 ndevos simon-dev: anyway, 3.2.7 is *really* old, and many issues are not fixed there, I strongly recommend you to plan to update to a more recent version
12:26 bluenemo joined #gluster
12:27 simon-dev ndevos: thanks for the tip, will feed back to the sysadmin. i've narrowed the issue down to glusterd: 0-rpc-transport/rdma: Failed to get IB devices. I cannot find anything online that addresses this error. Do you know any more about it?
12:27 jiffin joined #gluster
12:27 ndevos simon-dev: that should not be a critical error if you do not use rdma, things should continue regardless
12:30 kdhananjay joined #gluster
12:31 jiffin1 joined #gluster
12:32 kkeithley hmmm.  We don't have an info item for the Debian repos on download.gluster.org.
12:33 Hesulan joined #gluster
12:36 arcolife joined #gluster
12:38 wnlx joined #gluster
12:39 julim joined #gluster
12:42 Manikandan joined #gluster
12:47 jiffin1 joined #gluster
12:50 jiffin joined #gluster
12:51 hchiramm_ joined #gluster
12:56 Rasathus joined #gluster
12:56 nbalacha joined #gluster
12:59 harish_ joined #gluster
13:01 haomaiwa_ joined #gluster
13:04 jiffin joined #gluster
13:08 shyam joined #gluster
13:08 mpietersen joined #gluster
13:09 mpietersen joined #gluster
13:10 Manikandan joined #gluster
13:12 mpietersen joined #gluster
13:13 mpietersen joined #gluster
13:14 beeradb joined #gluster
13:15 mpietersen joined #gluster
13:15 jiffin1 joined #gluster
13:23 ndarshan joined #gluster
13:23 jiffin joined #gluster
13:24 simon-dev ndevos: turns out the issue was down to iptables. thank you for your help & time :)
13:24 overclk joined #gluster
13:33 hchiramm joined #gluster
13:33 hchiramm_ joined #gluster
13:37 jiffin joined #gluster
13:37 BitByteNybble110 joined #gluster
13:41 coredump joined #gluster
13:49 mhulsman joined #gluster
13:51 Gnomethrower joined #gluster
13:52 simon-dev Am i able to add-brick to a replicate volume, whilst changing the replica count, on 3.2.7? the volume was set up with replica 2, and currently has 2 bricks. i need to add a 3rd brick.
13:53 simon-dev or would i have to umount, stop the volume then create a new one with the corrected replica, old bricks + the new brick?
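On releases after 3.2 (3.3 onward), add-brick can raise the replica count in place, which avoids recreating the volume; a sketch with placeholder names. On 3.2.7 itself this syntax may not exist, in which case the stop/recreate route described above is the fallback:

```shell
# Grow a replica-2 volume to replica 3 with one new brick
gluster volume add-brick VOLNAME replica 3 server3:/export/brick
# Kick off self-heal so the new brick gets populated
gluster volume heal VOLNAME full
```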
13:58 sage joined #gluster
13:59 jiffin1 joined #gluster
13:59 EinstCrazy joined #gluster
13:59 EinstCrazy joined #gluster
14:01 haomaiwa_ joined #gluster
14:07 legreffier i get a shit-ton of these logs on gluster 3.7.10:
14:07 legreffier [2016-04-06 14:06:56.384237] I [dict.c:473:dict_get] (-->/usr/lib64/libglusterfs.so.0(default_getxattr_cbk+0xc2) [0x7f212f048412] -->/usr/lib64/glusterfs/3.7.10/xlator/features/marker.so(marker_getxattr_cbk+0xde) [0x7f211b7b862e] -->/usr/lib64/libglusterfs.so.0(dict_get+0x63) [0x7f212f031563] ) 0-dict: !this || key=() [Invalid argument]
14:08 glusterbot legreffier: ('s karma is now -130
14:08 legreffier (heh it didn't seem that long in file)
14:10 EinstCrazy joined #gluster
14:13 Debloper joined #gluster
14:13 wushudoin joined #gluster
14:15 Hesulan joined #gluster
14:17 Hesulaan joined #gluster
14:17 legreffier no-one experienced these ?
14:18 arcolife joined #gluster
14:18 bwerthmann joined #gluster
14:19 Hesulaan joined #gluster
14:19 Hesulaan joined #gluster
14:20 rwheeler joined #gluster
14:22 jiffin1 joined #gluster
14:22 Hesulan joined #gluster
14:24 bwerthma1n joined #gluster
14:27 hchiramm joined #gluster
14:31 jiffin joined #gluster
14:33 farblue I have a replica-3 gluster volume and I’d like to replace one of the bricks. I have a spare server with a brick ready to go. Can I just use replace-brick?
14:42 jiffin joined #gluster
14:42 atinm joined #gluster
14:43 gowtham joined #gluster
14:45 jiffin1 joined #gluster
14:49 syadnom_ farblue, I believe so
14:50 farhorizon joined #gluster
14:50 syadnom_ farblue, but you'll be down to 2 replicas until you run rebalance...
14:55 jiffin1 joined #gluster
14:57 farblue syadnom_ is there a better way?
14:57 farblue and is there a less verbose way to check progress than volume heal info?
14:58 syadnom_ I'm no expert here... but not that I know of.  You'll have to get the data to the new brick somehow, why not use the built in mechanism and save yourself unforseen headaches?
14:59 farblue yeah, I’m doing it now
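The sequence farblue is running, roughly, with placeholder brick paths. On 3.7 the old data-migrating form of replace-brick is gone, so `commit force` swaps the brick immediately and self-heal fills it afterwards (hence syadnom_'s point about being down to two live replicas until it finishes):

```shell
gluster volume replace-brick VOLNAME server1:/old/brick server4:/new/brick commit force
# Populate the new brick via self-heal
gluster volume heal VOLNAME full
# Verbose progress; the terser per-brick count answers farblue's second question
gluster volume heal VOLNAME info
gluster volume heal VOLNAME statistics heal-count
```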
15:00 bennyturns joined #gluster
15:00 Hesulan joined #gluster
15:06 julim joined #gluster
15:07 jiffin1 joined #gluster
15:09 skylar joined #gluster
15:11 eagles0513875_ hey guys how would one go about installing gluster on one's VPS that is already formatted with ext4, for example
15:15 farblue you can use Gluster on ext4 although there are some concerns about xattr size and space
15:15 eagles0513875_ farblue: what is recommended practice
15:15 eagles0513875_ what filesystem should be used?
15:16 eagles0513875_ as i havent quite figured out how it works
15:16 eagles0513875_ i was under the impression farblue that one would need to format their drives with gluster
15:16 farblue well, recommended filesystem is XFS - as I discovered after building a setup with ext4
15:16 eagles0513875_ ok
15:16 farblue the recommendation is to use dedicated disks for gluster ‘bricks’
15:17 eagles0513875_ humm
15:17 eagles0513875_ ok
15:17 farblue but if performance isn’t much of an issue you could just use a folder on your ext4 system
15:17 eagles0513875_ farblue: well i want something scalable and highly performant if possible
15:18 farblue if you are using LVM you could shrink your ext4 partition and create a new lv with XFS
15:18 farblue although, tbh, if you are using a VPS and virtualised access to the underlying storage I don’t think you will ever get the best performance
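The LVM route farblue mentions, as a destructive sketch; the volume group, LV names, and sizes are made up, shrinking ext4 requires the filesystem unmounted and checked first, and this should only be attempted from a rescue environment with backups:

```shell
umount /dev/vg0/data
e2fsck -f /dev/vg0/data
resize2fs /dev/vg0/data 80G            # shrink the filesystem first...
lvreduce -L 80G /dev/vg0/data          # ...then the LV, to the same size
lvcreate -n gluster-brick -L 40G vg0   # new LV from the freed space
mkfs.xfs -i size=512 /dev/vg0/gluster-brick   # 512-byte inodes leave room for gluster xattrs
mkdir -p /data/brick1 && mount /dev/vg0/gluster-brick /data/brick1
```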
15:18 Gnomethrower joined #gluster
15:24 harish_ joined #gluster
15:35 overclk joined #gluster
15:41 nishanth joined #gluster
15:44 kpease joined #gluster
15:48 atinm joined #gluster
15:51 j8kster joined #gluster
15:55 kkeithley legreffier: what version?
15:55 kkeithley oh,nm   I see 3.7.10
16:04 nbalacha joined #gluster
16:14 amye joined #gluster
16:14 DV joined #gluster
16:19 jmarley joined #gluster
16:22 bennyturns joined #gluster
16:23 hchiramm joined #gluster
16:23 pur joined #gluster
16:29 robb_nl joined #gluster
16:41 jiffin1 joined #gluster
16:44 vmallika joined #gluster
16:44 spalai joined #gluster
16:50 gem joined #gluster
16:52 jiffin joined #gluster
17:00 coredump joined #gluster
17:03 skylar joined #gluster
17:08 jiffin1 joined #gluster
17:09 bwerthmann joined #gluster
17:15 arcolife joined #gluster
17:19 jiffin1 joined #gluster
17:20 chirino joined #gluster
17:22 arcolife joined #gluster
17:22 luizcpg joined #gluster
17:25 jwaibel joined #gluster
17:33 gem joined #gluster
17:37 shubhendu joined #gluster
17:37 jiffin1 joined #gluster
17:40 jiffin joined #gluster
17:41 kuyot joined #gluster
17:52 Gnomethrower joined #gluster
17:53 jiffin joined #gluster
17:54 kuyot what is the best way to transfer 4TB of small files to a gluster server? And am I asking for trouble with that type of data?
17:54 kuyot small files as in under 100K
17:56 kkeithley maybe https://github.com/gluster/glusterfs-coreutils/
17:56 glusterbot Title: GitHub - gluster/glusterfs-coreutils: Tools that work directly on Gluster volume, inspired by the standard coreutils. (at github.com)
17:58 ovaistariq joined #gluster
18:01 jiffin1 joined #gluster
18:02 shyam joined #gluster
18:04 jiffin joined #gluster
18:06 skoduri joined #gluster
18:09 jiffin1 joined #gluster
18:12 _gbox kkeithley: That's very cool.  Have you used it?
18:14 gbox kuyot:  cp or rsync work well with GNU parallel (you can throttle the transfer up/down).  There are some rsync examples for gluster out there.
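The parallel-copy idea gbox describes can be sketched with plain coreutils (no GNU parallel needed); the paths here are temporary stand-ins, where in kuyot's case DST would be the gluster FUSE mount:

```shell
SRC=$(mktemp -d); DST=$(mktemp -d)
# stand-in for the tree of small files
for i in $(seq 1 50); do echo "payload $i" > "$SRC/file$i"; done
# four concurrent cp workers; -print0/-0 copes with odd filenames
find "$SRC" -mindepth 1 -maxdepth 1 -print0 | xargs -0 -P4 -I{} cp -a {} "$DST"/
```

Swapping `rsync -a` in for `cp -a` makes the transfer restartable, at the cost of extra stat calls, which are the expensive part of small-file workloads on gluster.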
18:15 jiffin1 joined #gluster
18:18 bennyturns joined #gluster
18:21 jiffin joined #gluster
18:24 jiffin1 joined #gluster
18:32 om2 joined #gluster
18:39 jiffin1 joined #gluster
18:42 jiffin joined #gluster
18:45 deniszh joined #gluster
18:47 jiffin1 joined #gluster
18:50 shubhendu joined #gluster
19:00 sagarhani joined #gluster
19:09 bennyturns joined #gluster
19:12 samppah has anyone here tested libgfapi with ovirt?
19:12 samppah it's still not officially supported but i've been testing it with attaching device through libvirt.. however for some reason performance seems quite low
19:18 ovaistariq joined #gluster
19:32 gbox samppah: Is libvirt supported? but not ovirt?  libvirt on gluster is glitchy too.  Seems to be WIP: https://www.youtube.com/watch?v=z871u7mtUB4
19:33 virusuy Hi
19:33 glusterbot virusuy: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:33 virusuy We have a 4 nodes distributed-replicated volume with gluster 3.7.10
19:33 samppah gbox: yes, libvirt has support for libgfapi
19:34 virusuy and for some reason, when we copy files to the volume, they end up on one node (and its replica node)
19:34 virusuy but aren't copied to the other pair of nodes
19:34 virusuy It's behaving like it's a replica volume instead a distributed-replicated
19:48 ctria joined #gluster
19:55 john51_ joined #gluster
19:57 julim joined #gluster
19:58 post-factum virusuy: rebalance needed?
19:58 virusuy post-factum: we run rebalance but nothing happened
19:59 virusuy ran*
19:59 whiteadam joined #gluster
19:59 arthurl joined #gluster
20:00 whiteadam Is there, per chance, a site/book/novela with a simple explanation of gluster and how it works, etc. Having a hard time wrapping my head around it.
20:00 arthurl whiteadam https://www.youtube.com/watch?v=HkBndZOcEA0
20:01 whiteadam arthurl: oh man, thanks so much. +1 kitten for you
20:02 arthurl you're very welcome
20:04 virusuy post-factum: wow, i just ran rebalance again and seems like it's working now
20:04 virusuy post-factum: but rebalance shouldn't be executed manually , right ?
20:12 Hesulan joined #gluster
20:13 post-factum virusuy: unless you add new bricks
20:13 virusuy post-factum: yes
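For reference, the rebalance post-factum is pointing at; it is a manual step, normally run after add-brick (VOLNAME is a placeholder):

```shell
gluster volume rebalance VOLNAME start
gluster volume rebalance VOLNAME status
# fix-layout recalculates the hash ranges without moving existing data,
# which is the usual fix when new files all land on one replica pair
gluster volume rebalance VOLNAME fix-layout start
```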
20:16 Wizek joined #gluster
20:18 Hesulan joined #gluster
20:33 ndevos gbox: libvirt makes it possible to run VMs with a gluster://... url for images, but oVirt does not use that, it configures VMs to run over fuse mounts
20:39 jlp1 joined #gluster
20:46 rwheeler joined #gluster
20:47 arthurl hi guys- i'm running centos 6
20:48 arthurl is glusterfs considered a 'production-grade' tool for centos?
20:50 gbox ndevos:  Thx, so libvirt should perform better?
20:54 gbox arthurl: depends on what you have in production.  It has great features & good performance.  Development is a bit of a moving target but if you spend enough time in this channel it should all work :)
20:55 arthurl gbox cool thanks :)
20:55 arthurl i'm trying to make a decision on whether or not to put this into our production webserver stack
21:00 gbox arthurl: for redundancy?  scalability?
21:03 arthurl gbox redundancy mostly
21:03 arthurl we have some webserver clusters i'd like to switch over to glusterfs
21:10 MugginsM joined #gluster
21:16 virusuy post-factum: the weirdest thing is: if you copy single files to the volume, it works like a replica volume, but if you copy a folder with files inside, it works just fine
21:37 whiteadam left #gluster
21:37 hackman joined #gluster
21:42 Guest53161 joined #gluster
21:49 jlp1 i cannot for the life of me get one of my hosts back into my cluster.  it keeps showing a status of Peer Rejected (Connected).  This host has the lone brick for one of the volumes. any ideas?
21:50 jlp1 can i force remove the host from the cluster, and readd it or will i lose that volume?
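No answer appears in the log, but the procedure usually cited for Peer Rejected is to resync the rejected node's configuration from the pool rather than remove it; a hedged sketch, run on the rejected host. The brick data on disk is untouched, and the volume definitions (including the lone-brick volume) are pulled back from the other peers, but back up /var/lib/glusterd first:

```shell
systemctl stop glusterd              # or: service glusterd stop
cd /var/lib/glusterd
# keep the node's UUID, drop the stale volume/peer state
find . -mindepth 1 ! -name glusterd.info -delete
systemctl start glusterd
gluster peer probe GOOD_PEER         # re-probe a healthy pool member
systemctl restart glusterd           # pick up the resynced configuration
```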
22:09 hagarth joined #gluster
22:18 hagarth joined #gluster
22:18 social joined #gluster
22:19 ovaistariq joined #gluster
22:38 ovaistariq joined #gluster
22:42 R0ok_ joined #gluster
23:03 amye joined #gluster
23:12 MugginsM joined #gluster
23:13 kuyot joined #gluster