IRC log for #gluster, 2016-12-09


All times shown according to UTC.

Time Nick Message
00:00 timotheus1_ joined #gluster
00:30 haomaiwang joined #gluster
00:31 susant joined #gluster
00:43 ashp left #gluster
00:58 jeremyh joined #gluster
01:05 dnorman joined #gluster
01:15 shdeng joined #gluster
01:27 suliba joined #gluster
01:36 haomaiwang joined #gluster
02:04 phileas joined #gluster
02:05 derjohn_mobi joined #gluster
02:05 jiffin joined #gluster
02:14 haomaiwang joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:50 jeremyh joined #gluster
03:14 haomaiwang joined #gluster
03:29 amye joined #gluster
03:29 atinm joined #gluster
03:34 Lee1092 joined #gluster
03:36 magrawal joined #gluster
03:41 riyas joined #gluster
03:51 nbalacha joined #gluster
03:54 Wizek_ joined #gluster
03:56 ahino joined #gluster
04:08 buvanesh_kumar joined #gluster
04:09 RameshN joined #gluster
04:14 haomaiwang joined #gluster
04:23 itisravi joined #gluster
04:24 vbellur joined #gluster
04:25 jiffin joined #gluster
04:29 jeremyh joined #gluster
04:35 poornima_ joined #gluster
04:41 sbulage joined #gluster
04:48 sanoj joined #gluster
04:53 jiffin joined #gluster
04:56 hgowtham joined #gluster
04:58 kramdoss_ joined #gluster
05:00 Wizek_ joined #gluster
05:01 Prasad joined #gluster
05:02 Karan joined #gluster
05:09 ndarshan joined #gluster
05:12 shdeng joined #gluster
05:14 haomaiwang joined #gluster
05:15 Muthu joined #gluster
05:17 ppai joined #gluster
05:20 daMaestro joined #gluster
05:20 hgowtham joined #gluster
05:22 javi404 joined #gluster
05:23 karthik_us joined #gluster
05:36 skoduri joined #gluster
05:43 kdhananjay joined #gluster
05:48 hackman joined #gluster
05:48 aravindavk joined #gluster
05:51 ashiq joined #gluster
05:55 hgowtham joined #gluster
05:56 kusznir joined #gluster
05:58 kusznir Hi all: Question on gluster: is it possible to add and remove nodes in a replicated (only) configuration?  I'm looking at gluster as the storage system for an oVirt cluster, and am wondering if it's possible to add nodes down the road or not.
05:58 kusznir Or a related question: how hard is it to "migrate" from one node to another (e.g., retiring a piece of hardware in favor of a newer/better one, or replacing failed hardware)?
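Migrating a brick off retiring hardware is normally done with replace-brick. A hedged sketch, assuming a volume named myvol and placeholder host/brick names; on 3.7-era releases only the "commit force" form is supported, after which the self-heal daemon copies the data onto the new brick:

    gluster peer probe newserver
    gluster volume replace-brick myvol oldserver:/data/brick1 newserver:/data/brick1 commit force
    gluster volume heal myvol full        # kick off a full heal onto the new brick
    gluster peer detach oldserver         # only once it no longer hosts any bricks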
05:59 Philambdo joined #gluster
06:01 susant joined #gluster
06:04 wushudoin joined #gluster
06:14 haomaiwang joined #gluster
06:20 ndarshan joined #gluster
06:20 susant joined #gluster
06:27 sbulage joined #gluster
06:31 k4n0 joined #gluster
06:40 nishanth joined #gluster
06:42 jtux joined #gluster
06:50 Philambdo joined #gluster
07:03 kramdoss_ joined #gluster
07:03 azilian joined #gluster
07:11 jtux joined #gluster
07:14 haomaiwang joined #gluster
07:15 mhulsman joined #gluster
07:27 mhulsman joined #gluster
07:36 mhulsman joined #gluster
07:43 ankitraj joined #gluster
07:46 arc0 joined #gluster
07:49 ivan_rossi joined #gluster
07:54 hchiramm joined #gluster
07:54 msvbhat joined #gluster
08:04 [diablo] joined #gluster
08:12 SeerKan joined #gluster
08:12 SeerKan Hi guys
08:12 SeerKan I have 100+ split-brain situations, what is the best way to handle these in bulk?
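For resolving many split-brain entries at once, the heal CLI offers per-file policies and, on recent 3.7+ releases, an automatic policy option. A hedged sketch with a placeholder volume name myvol; which policies are available depends on the exact GlusterFS version:

    gluster volume heal myvol info split-brain                   # list affected files
    gluster volume heal myvol split-brain bigger-file <FILE>     # resolve one file by size
    gluster volume heal myvol split-brain latest-mtime <FILE>    # or by newest mtime
    # newer releases can resolve automatically, e.g. always prefer the newest copy:
    gluster volume set myvol cluster.favorite-child-policy mtime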
08:13 Muthu joined #gluster
08:14 haomaiwang joined #gluster
08:17 pulli joined #gluster
08:21 sanoj joined #gluster
08:23 devyani joined #gluster
08:23 thomson joined #gluster
08:24 devyani7_ joined #gluster
08:28 mhulsman joined #gluster
08:29 thomson Hi! I have a problem: http://rgho.st/6LM2MNJlJ Can someone tell me how to fix it? Thanks.
08:29 glusterbot Title: sdf.PNG — RGhost — file-sharing service (at rgho.st)
08:32 jri joined #gluster
08:36 devyani7_ joined #gluster
08:37 nishanth joined #gluster
08:37 anoopcs thomson, Which version of GlusterFS?
08:38 devyani7 joined #gluster
08:41 pulli joined #gluster
08:42 BitByteNybble110 joined #gluster
08:44 jtux joined #gluster
08:58 kotreshhr joined #gluster
09:00 ankitraj joined #gluster
09:02 Muthu joined #gluster
09:06 hackman joined #gluster
09:08 kramdoss_ joined #gluster
09:08 mhulsman joined #gluster
09:08 thomson 3.9
09:08 thomson @-anoopcs-, 3.9
09:09 karthik_us joined #gluster
09:10 kdhananjay joined #gluster
09:12 anoopcs thomson, Ok..Let me check.
09:12 flying joined #gluster
09:14 haomaiwang joined #gluster
09:15 prasanth joined #gluster
09:34 panina joined #gluster
09:37 Slashman joined #gluster
09:39 msvbhat joined #gluster
09:39 saybeano joined #gluster
09:41 mhulsman joined #gluster
09:41 Gnomethrower joined #gluster
09:41 sanoj joined #gluster
09:43 ashiq joined #gluster
09:46 derjohn_mobi joined #gluster
09:46 mahendratech joined #gluster
09:55 arc0 joined #gluster
09:57 satya4ever joined #gluster
10:10 sanoj joined #gluster
10:14 haomaiwang joined #gluster
10:15 sbulage joined #gluster
10:17 skoduri joined #gluster
10:26 haomaiwang joined #gluster
10:29 derjohn_mobi joined #gluster
10:43 itisravi joined #gluster
10:46 devyani7 joined #gluster
10:50 averi_work joined #gluster
10:50 skoduri joined #gluster
10:50 flomko joined #gluster
10:51 averi_work hello, with the recent 3.7.18 upgrade, starting glusterfsd no longer works (it complains about /etc/glusterfs/glusterfsd.vol being missing, but that file has never been there)
10:52 averi_work any ideas?
10:52 flomko joined #gluster
10:52 averi_work is that file strictly needed now, and are the /var/lib/glusterd/vols/*.vol files ignored?
10:53 kshlm averi_work, Starting glusterd requires /etc/glusterfs/glusterd.vol to be present.
10:54 averi_work kshlm, we never had that file in there but it always worked like a charm, wondering how
10:54 averi_work :/
10:54 kshlm glusterd later starts other daemons (glusterfsds) reading the files from /var/lib/glusterd/vols/*
10:54 kshlm averi_work, That's not true. That file has been needed for glusterd to start.
10:55 averi_work kshlm, ok, so glusterfsd.vol can totally be a very simple configuration file with the vol names and the mount points in it as /var/lib/glusterd/vols/* is read later anyway
10:56 averi_work kshlm, i.e https://paste.fedoraproject.org/502291/28097714/raw/
10:56 kshlm BTW, where did you get the packages for 3.7.18. It's not been announced yet.
10:57 averi_work kshlm, http://buildlogs.centos.org/centos/6/storage/x86_64/gluster-3.7/
10:57 glusterbot Title: Index of /centos/6/storage/x86_64/gluster-3.7 - CentOS Mirror (at buildlogs.centos.org)
10:58 averi_work kshlm, we were actually using http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo, but that location went down at some point
10:59 averi_work kshlm, is there an official repository that superseded http://download.gluster.org?
10:59 glusterbot Title: Gluster.org Download Server (at download.gluster.org)
10:59 kshlm averi_work, It should be /etc/glusterfs/glusterd.vol, and it's specifically used for glusterd configuration. In most cases the default one is all you need.
10:59 averi_work kshlm, yeah, that's included by default
11:00 kshlm You don't need a /etc/glusterfs/glusterfsd.vol. That's an old (very old) thing, that was removed quite some time ago.
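For reference, the stock /etc/glusterfs/glusterd.vol shipped with 3.7-era packages looks roughly like the sketch below; the exact option list varies between releases, so treat this as an approximation rather than the authoritative file:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket,rdma
        option transport.socket.keepalive-time 10
        option transport.socket.keepalive-interval 2
        option transport.socket.read-fail-log off
        option ping-timeout 0
    #   option base-port 49152
    end-volume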
11:00 kshlm BTW, http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/EPEL.README explains what happened with download.gluster.org repo.
11:01 kshlm tl;dr: EPEL rpms are now provided by the CentOS Storage-SIG.
11:01 kshlm averi_work, If you could paste what you're trying to start, and the logs. I can help better.
11:01 atinm joined #gluster
11:01 kshlm @paste
11:01 glusterbot kshlm: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
11:02 averi_work kshlm, seems I was trying to start glusterfsd
11:02 averi_work which is started by glusterd on its own
11:02 kshlm Do you still have a glusterfsd service script?
11:02 averi_work yes
11:02 kshlm I thought it was removed from the builds.
11:03 averi_work kshlm, https://paste.fedoraproject.org/502296/81281402/raw/
11:03 kshlm This is a bug with the packaging. I'll check what's up with it.
11:04 kshlm averi_work, Thanks for the info.
11:04 averi_work yw
11:04 kshlm averi_work, Just FYI, you should always be starting glusterd. Not anything else.
11:04 kshlm glusterd handles starting everything else.
11:04 averi_work kshlm, yeah, makes sense
11:05 averi_work the leftover of that old init script confused me
11:05 jiffin1 joined #gluster
11:06 kotreshhr left #gluster
11:07 averi_work kshlm, thanks a bunch!
11:07 magrawal joined #gluster
11:09 haomaiwang joined #gluster
11:14 haomaiwang joined #gluster
11:16 msvbhat joined #gluster
11:19 kramdoss_ joined #gluster
11:22 skoduri joined #gluster
11:48 jiffin1 joined #gluster
11:50 itisravi_ joined #gluster
11:52 shyam joined #gluster
12:02 mhulsman1 joined #gluster
12:02 atinm joined #gluster
12:05 mhulsman joined #gluster
12:14 haomaiwang joined #gluster
12:14 buvanesh_kumar joined #gluster
12:18 nbalacha joined #gluster
12:26 buvanesh_kumar joined #gluster
12:38 pulli joined #gluster
12:39 msvbhat joined #gluster
12:58 johnmilton joined #gluster
13:02 fyxim joined #gluster
13:14 haomaiwang joined #gluster
13:18 rwheeler joined #gluster
13:18 poornima_ joined #gluster
13:22 flomko Hi all! I'm trying to create an HA setup with replicated glusterfs. In /etc/fstab I have "glerver2:/link  /time/links    glusterfs       defaults,_netdev,fetch-attempts=10,backupvolfile-server=glerver102". Everything works correctly, but after shutting down glerver2 I have trouble accessing the volume, like 'transport endpoint not connected'
13:23 flomko maybe I have the wrong parameter for 'backupvolfile-server'?
13:24 flomko mount shows only glerver2:/link on /time/links type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
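A hedged sketch of the fstab entry in question (hostnames and paths taken from the log). Note that backupvolfile-server only affects fetching the volume file at mount time; once mounted, the FUSE client talks to all bricks directly, so losing access after a replica goes down usually points at quorum or connectivity rather than this option. Newer mount.glusterfs builds also accept a backup-volfile-servers list, and fetch-attempts has been deprecated:

    # /etc/fstab
    glerver2:/link  /time/links  glusterfs  defaults,_netdev,backupvolfile-server=glerver102  0 0
    # or, on newer clients:
    # glerver2:/link  /time/links  glusterfs  defaults,_netdev,backup-volfile-servers=glerver102  0 0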
13:26 unclemarc joined #gluster
13:27 dnorman joined #gluster
13:29 susant joined #gluster
13:31 RustyB joined #gluster
13:34 flomko if I shut off glerver1, everything works correctly
13:37 billputer joined #gluster
13:38 vbellur joined #gluster
13:39 riyas joined #gluster
13:41 kramdoss_ joined #gluster
13:42 sanoj joined #gluster
13:43 ankitraj joined #gluster
13:45 averi_work left #gluster
13:49 nbalacha joined #gluster
14:09 alvinstarr I am having problems with geo-syncing. I am seeing an I/O error but I
14:10 alvinstarr don't seem to see dmesg errors. The logfile snippet is http://pastebin.centos.org/59346/  Any hints on where to start looking?
14:11 alvinstarr Is there an "fsck" for gluster?
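There is no fsck-style tool for gluster itself; the closest built-in health checks are the heal and geo-replication status commands. A hedged sketch with placeholder volume and slave names:

    gluster volume status myvol                                    # brick and daemon health
    gluster volume heal myvol info                                 # files pending heal
    gluster volume heal myvol info split-brain                     # files in split-brain
    gluster volume geo-replication myvol slavehost::slavevol status detail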
14:14 Asako joined #gluster
14:14 plarsen joined #gluster
14:14 haomaiwang joined #gluster
14:23 PotatoGim joined #gluster
14:26 vbellur joined #gluster
14:30 side_control joined #gluster
14:48 mahendratech joined #gluster
14:50 jkroon joined #gluster
14:56 bartden joined #gluster
14:57 bartden hi, i have a distributed gluster cluster of 3 nodes, when i want to do maintenance, should i first stop the volume before rebooting one of the nodes? Or will rebooting the node stop the volume as well? Running on Centos 6.
15:14 haomaiwang joined #gluster
15:14 arpu joined #gluster
15:18 buvanesh_kumar joined #gluster
15:24 SeerKan hi, I have a very strange problem: if I leave just 1 machine up it handles the load very well, load on the machine is under 0.5 at all times; however if I add the second machine the load skyrockets to 5-15
15:24 SeerKan any ideas on what is happening and what I can do to make it better?
15:25 squizzi joined #gluster
15:28 Gambit15 joined #gluster
15:33 ivan_rossi left #gluster
15:42 farhorizon joined #gluster
15:45 farhorizon joined #gluster
15:54 buvanesh_kumar joined #gluster
16:02 JoeJulian flomko: Check your log files for causes. My guess, you don't have quorum so the volume is stopped during your attempt to mount.
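To confirm whether quorum enforcement is what stops the volume when a replica goes down, the reconfigured options and the quorum settings can be inspected. A hedged sketch with a placeholder volume name:

    gluster volume info myvol                                # shows any reconfigured options
    gluster volume get myvol cluster.quorum-type             # client-side (AFR) quorum
    gluster volume get myvol cluster.server-quorum-type      # server-side (glusterd) quorum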
16:02 dnorman joined #gluster
16:05 snehring JoeJulian: Thanks. Seemed like a reasonable assumption, but wanted to get a second opinion.
16:06 JoeJulian alvinstarr: Check heal...info for both your master and slave. My initial guess is split-brain.
16:06 JoeJulian snehring: :)
16:07 JoeJulian SeerKan: Assuming your volume is replicated, the obvious answer is that when you bring up the replica, self-heal begins.
16:07 SeerKan JoeJulian: that's true, but the same thing happened when they were online for a long time and no self healing was running
16:08 SeerKan it just spikes to high load at random times... cpu is at 60-70% tops and io is 0 or very low
16:09 JoeJulian And there's nothing in the logs about it?
16:09 SeerKan would adding one extra EBS partition to each machine and adding them as brick2 help?
16:09 SeerKan no, no errors in logs
16:10 alvinstarr JoeJulian: I get 0 entries in split-brain
16:10 JoeJulian I wouldn't expect errors, but I would expect some sort of log message if something is happening that's not triggered by a client.
16:10 JoeJulian alvinstarr: selinux?
16:11 alvinstarr JoeJulian: SELINUX=disabled
16:11 annettec joined #gluster
16:13 JoeJulian alvinstarr: Then I would check the client and brick logs.
16:14 haomaiwang joined #gluster
16:17 alvinstarr JoeJulian: I am seeing "Numerical result out of range" and "No data available"
16:18 jiffin joined #gluster
16:19 JoeJulian Could you share?
16:21 alvinstarr JoeJulian: what would you like to see?
16:21 JoeJulian The log info with those strings.
16:21 wushudoin joined #gluster
16:23 alvinstarr JoeJulian:   here are the errors http://pastebin.centos.org/59356/   I could get the full logs if it will help but they are big.
16:26 JoeJulian Ah, ok, that makes sense then. I bet that above all the "failed to submit message" errors there's a disconnect.
16:27 JoeJulian What version is this?
16:28 alvinstarr glusterfs-3.7.11-1.el7
16:32 JoeJulian alvinstarr: Do any of your directories have more than 32767 entries by chance?
16:32 alvinstarr possibly. I would have to look
16:34 JoeJulian I'm wondering if this bug found by Nokia might be related: http://www.gluster.org/pipermail/gluster-devel/2016-December/051664.html
16:34 glusterbot Title: [Gluster-devel] "du" a large count files in a directory casue mounted glusterfs filesystem coredump (at www.gluster.org)
16:34 squizzi joined #gluster
16:37 alvinstarr In the volume that has the problem, the developers have tried to stay away from big-directory problems, but they could have missed something.
16:40 JoeJulian Otherwise, I would look for ping timeouts. A problem that I thought might be people choosing too-small VM instances may actually be a bug. If you find timeouts, a good gcore dump of a glusterfsd that's not responding and, possibly, a state dump would be invaluable to the developers.
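If ping timeouts do show up, the dumps JoeJulian mentions can be captured roughly as below; a hedged sketch with a placeholder volume name (statedumps land under /var/run/gluster by default, and gcore ships with gdb):

    grep -i "ping timeout" /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log
    gluster volume statedump myvol                        # state dump of the brick processes
    kill -USR1 <pid-of-glusterfsd>                        # alternative: per-process state dump
    gcore -o /tmp/glusterfsd-core <pid-of-glusterfsd>     # core of the unresponsive brick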
16:41 JoeJulian But now I have to go help my daughter build a snowman before it rains all our snow away. :)
16:41 alvinstarr have fun.
16:42 BitByteNybble110 joined #gluster
16:42 jkroon joined #gluster
16:49 Norky joined #gluster
16:52 sanoj joined #gluster
16:55 R4yTr4cer joined #gluster
17:03 krink joined #gluster
17:04 farhorizon joined #gluster
17:06 Gambit15 Hey, JoeJulian, I've got 4 servers in rep 3 (2 + 1), and "heal info" is showing there's a discrepancy between a brick on one of the servers & its arbiter
17:08 Gambit15 info split-brain shows no issues, but this 4GB file has been in this state since yesterday now :/
17:08 Gambit15 Any ideas on how to get more info about what the issue is & how to resolve it?
17:08 Gambit15 The info output is a bit terse. It reports a problem, but it doesn't say what...
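When heal info keeps reporting an entry that is not flagged as split-brain, a usual next step is to compare the AFR changelog xattrs of the file directly on the bricks (standard tools; the volume and paths below are placeholders):

    gluster volume heal myvol info                        # confirm which bricks list the file
    # on each brick server, inspect the pending-changelog xattrs for that file:
    getfattr -d -m . -e hex /bricks/brick1/path/to/disk-image.qcow2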
17:11 mlg9000 joined #gluster
17:12 gem joined #gluster
17:12 farhorizon joined #gluster
17:13 farhorizon joined #gluster
17:14 farhorizon joined #gluster
17:14 haomaiwang joined #gluster
17:14 krink joined #gluster
17:14 mlg9000 Hi all, I'm looking at setting up gluster to host VM images for ovirt.  I was wondering if anyone that's already doing something like that had a few minutes to discuss their experiences/lessons learned?
17:25 ivan_rossi1 joined #gluster
17:27 chris4 mlg9000, if you use debian make sure you use the repos from gluster.org.  The version in the debian repos is quite old
17:31 jbrooks joined #gluster
17:32 kusznir Is it possible to start with a 1-node gluster "cluster" that will later be expanded into a 3-node cluster with replication (only)?
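Growing a single-brick volume into a 3-way replica is done by probing the new peers and re-running add-brick with the new replica count; a hedged sketch with placeholder host and brick names:

    gluster peer probe server2
    gluster peer probe server3
    gluster volume add-brick myvol replica 3 server2:/data/brick1 server3:/data/brick1
    gluster volume heal myvol full      # populate the new replicas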
17:36 Gambit15 mlg9000, me
17:37 Gambit15 I'm going AFK for about 30 minutes, but feel free to send me any queries & I'll respond when I get back
17:38 kusznir Gambit15: I'm also wanting to do the same (gluster for ovirt).  I found this guide: http://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/
17:38 glusterbot Title: Up and Running with oVirt 4.0 and Gluster Storage — oVirt (at www.ovirt.org)
17:38 kusznir I'm looking at starting as small of a cluster as I can, with opportunity to scale up.
17:39 bwerthmann joined #gluster
17:44 mlg9000 chris4: This would all be on CentOS
17:45 kusznir Gambit15: Does that guide look good and complete, or are there other gotcha's I should be aware of?
17:45 kusznir (oh, yea, I'm building all this on CentOS).
17:46 chris4 I got no experience with gluster on CentOS
17:47 chris4 I tried it on opensuse and the dependencies were a mess so I went to debian
17:47 kusznir I thought RedHat subsidized/helped with gluster development???  If so, it should be in current CentOS releases (I'd think...)
17:48 mlg9000 Debian's support cycle is too short
17:48 mlg9000 yes gluster is a RH product
17:49 mlg9000 same with OVirt
17:49 mlg9000 =RHEV
17:55 Gambit15 kusznir, mlg9000, whilst we're typically an Ubuntu environment here, I chose CentOS 7 for the hypervisors as I expected that to have better support
17:56 Gambit15 RedHat's RHEV documentation is fantastic & very in-depth, covering both gluster & ovirt
17:56 Gambit15 (under their own RH names, of course)
17:58 kusznir Gambit15: what are RedHat's names?  I was getting confused tyring to find the "same" technology from their list.
17:58 Gambit15 kusznir, I used a couple of sources to get my first install up & running, which must have been just before that blog post! However their guide was a big part of it. That ovirt-gluster.conf & storage.conf thing is new
17:59 Gambit15 Gluster: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/
17:59 glusterbot Title: Product Documentation for Red Hat Gluster Storage - Red Hat Customer Portal (at access.redhat.com)
17:59 Gambit15 It's faaar more in-depth than the official gluster docs
18:00 mlg9000 Gambit15: So for our environment we are looking at scaling up to 300-500 VM (per datacenter) all on gluster.  For hardware we are looking to use/reuse our Dell R630 1U servers and add 10Gbit networking, which would be both VM hosts and gluster ("hyperconverged").  I'm guessing that equals ~30 physical servers.  We'd use SSD's for everything replica 3 (no raid) managed by heketi with one zone per rack...  I think 3 or 6 bricks per server makes
18:00 Gambit15 Again, I used a combination of the both to get my head around everything
18:00 mlg9000 does that seem sane?
18:02 alvinstarr JoeJulian: I hope the snow person project went well.
18:02 Gambit15 In my case I'm using one brick per server & providing the bricks via LVM & physical RAID
18:03 Gambit15 I'm using physical RAID just to make it less hassle when dealing with failing drives, but you don't need it
18:03 alvinstarr JoeJulian: The largest directory tree has 1153 entries, so there is nothing bigger than 32K in the volume in question.
18:03 MidlandTroy joined #gluster
18:04 mlg9000 Gambit15: that wastes a lot of disk though
18:05 mlg9000 and you'd need to add 3 servers at a time
18:05 Gambit15 Our 10G kit is on order, however I'm still getting decent speeds with bonded 1Gbit ports, a dedicated storage network & replica 3, arbiter 1 (meaning only 2 real copies of each file, not 3)
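For reference, a replica 3 arbiter 1 volume like the one described is created with the arbiter keyword; every third brick holds only metadata. A hedged sketch, with all host and path names as placeholders:

    gluster volume create myvol replica 3 arbiter 1 \
        server1:/data/brick1 server2:/data/brick1 server3:/data/arbiter1
    gluster volume start myvol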
18:05 alvinstarr JoeJulian: I get periodic  "... D [rpc-clnt-ping.c:281:rpc_clnt_start_ping] 0-gfchangelog: ping timeout is 0, returning" I assume these are not errors because they are not tagged with a W or E.
18:06 mlg9000 Gambit15: how many VM do you run and how big are they?
18:07 Gambit15 That's with 6 servers & 50 VMs (still loads of capacity left)
18:07 mlg9000 the VM's are hosted on the gluster nodes?
18:08 Gambit15 Luckily the network has still held together reasonably, but it's pushing it
18:08 Gambit15 Yes
18:09 Gambit15 I've got a new shipment of chunky proliants sat waiting for installation, but I'm currently running everything on Dell R710s which I filled up with SATA HDDs
18:10 edong23 joined #gluster
18:12 Gambit15 7.5k RPM even! We're not dealing with ridiculous I/O demands & the physical RAID + the effective RAID provided by gluster is serving very well. The VMs get about 300Mbps r/w
18:13 d0nn1e joined #gluster
18:14 mlg9000 IOPS is more my concern
18:14 haomaiwang joined #gluster
18:14 Gambit15 That said, everything's also tweaked for a VM environment. No swap in the VMs & caching on the external proxies
18:15 Gambit15 If you've got a good network, the best way to improve I/O with gluster is to increase the number of peers/servers
18:15 mlg9000 that's a good point not to swap
18:16 Gambit15 I've read some whitepapers with silly I/O speeds
18:16 Gambit15 Rackspace setup an environment with around 85 peers & posted the results on their blog
18:17 mlg9000 I didn't see any performance improvement going from a 3 node/brick environment to 9/9
18:17 mlg9000 which I didn't expect
18:17 mlg9000 I'm just testing with some old R610's
18:18 Gambit15 Look into gluster's caching mechanisms. It can utilise SSDs very well without requiring *everything* to run on them
18:18 Gambit15 Gets you a far better $/GB
18:19 mlg9000 the prices for SSD's is so low these days it makes sense to just standardize on them
18:19 Gambit15 There are loads of tweaks you can make with gluster to optimise performance, although I've not had the need to go that far yet
18:20 mlg9000 do you do sharding?
18:21 Gambit15 Depends on your needs & location. I'm currently working in Brazil & the price of meeting our needs purely with SSDs would be ridiculous. COTS SATA drives are doing the job nicely for us at the moment, and it's cheap as chips to buy, replace & upgrade
18:22 Gambit15 I had issues with sharding. As soon as I enabled it, my I/O went down to 12Kbps!
18:23 Gambit15 I've seen enough conversation in the mailing lists to know that others don't have that problem though, so it's next on the list to revisit
18:25 Gambit15 My bricks are large enough not to need it yet though, and it gives the benefit of having entire VHDs on each brick. In the case something went belly up, that makes it far easier to recuperate from before having to turn to backups
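Sharding is enabled per volume with two options; a hedged sketch with a placeholder volume name. Only files created after the option is turned on get sharded, and the feature is aimed mainly at large VM images, so it is worth testing before enabling it on a production volume:

    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB
    gluster volume get myvol features.shard-block-size     # verify the setting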
18:26 Gambit15 BTW, WRT backups, I'm using geo-rep onto large ZFS storage boxes
18:26 Gambit15 Then ZFS snapshots from there
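The geo-replication setup Gambit15 describes (a master gluster volume replicated onto remote storage boxes) is configured roughly as below. This is a hedged sketch: mastervol, backuphost and slavevol are placeholders, the slave side must itself be a gluster volume (here presumably backed by ZFS), and passwordless root SSH to the slave has to be in place first:

    gluster system:: execute gsec_create                           # generate geo-rep ssh keys
    gluster volume geo-replication mastervol backuphost::slavevol create push-pem
    gluster volume geo-replication mastervol backuphost::slavevol start
    gluster volume geo-replication mastervol backuphost::slavevol status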
18:26 Gambit15 Right...back in 30 mins
18:46 mlg9000 My next question would be about volumes: how many do you have?  I can't see a reason for needing more than 2, one for the engine and the other for VM disks
19:04 mlg9000 left #gluster
19:06 edong23 joined #gluster
19:12 k4n0 joined #gluster
19:14 haomaiwang joined #gluster
19:20 shaunm joined #gluster
19:21 Utoxin Seeing something odd with my gluster that I recently added bricks to. Some (maybe all?) of my clients are failing to see some of the subvolumes. I remember seeing something about not needing anything in the glusterd.vol with recent versions of gluster, because it shares the data when it connects. Could a glusterd.vol with partial data in it be causing a problem?
19:23 Utoxin pastebin
19:23 Utoxin ... Bah. Thought that was the trigger to get the link.
19:24 krink joined #gluster
19:25 Utoxin Relevant log entries from the clients seeing the problem: http://pastebin.centos.org/59366/
19:34 mhulsman joined #gluster
19:45 Utoxin Hmmm. Removing that didn't help. (It appears to be getting ignored anyway.)
20:07 masber joined #gluster
20:14 haomaiwang joined #gluster
20:46 sanoj joined #gluster
20:50 jkroon joined #gluster
20:58 masuberu joined #gluster
21:07 DV joined #gluster
21:08 marlinc joined #gluster
21:14 haomaiwang joined #gluster
21:39 derjohn_mobi joined #gluster
21:45 farhoriz_ joined #gluster
22:05 DV joined #gluster
22:14 haomaiwang joined #gluster
22:48 jbrooks joined #gluster
22:49 lanning joined #gluster
22:51 dnorman joined #gluster
22:57 annettec joined #gluster
23:00 hackman joined #gluster
23:14 haomaiwang joined #gluster
