IRC log for #gluster, 2013-02-04

All times shown according to UTC.

Time Nick Message
00:06 melanor9 joined #gluster
00:23 sjoeboo joined #gluster
01:36 kevein joined #gluster
02:06 sashko joined #gluster
02:45 sashko joined #gluster
02:47 polenta joined #gluster
02:54 nueces joined #gluster
03:05 shylesh joined #gluster
03:06 overclk joined #gluster
03:09 overclk joined #gluster
03:15 bharata joined #gluster
03:47 raven-np joined #gluster
03:49 overclk joined #gluster
03:59 raven-np joined #gluster
04:00 hagarth joined #gluster
04:00 bulde joined #gluster
04:08 sripathi joined #gluster
04:16 glusterbot New news from newglusterbugs: [Bug 906763] SSL code does not use OpenSSL multi-threading interface <http://goo.gl/JXYw6>
04:26 bala joined #gluster
04:32 lala joined #gluster
04:38 Humble joined #gluster
04:56 rastar joined #gluster
05:05 melanor9 joined #gluster
05:09 deepakcs joined #gluster
05:10 vpshastry joined #gluster
05:31 raghu joined #gluster
05:39 ramkrsna joined #gluster
05:39 ramkrsna joined #gluster
05:56 sripathi1 joined #gluster
05:58 sripathi1 joined #gluster
06:40 Nevan joined #gluster
06:41 vimal joined #gluster
06:41 sripathi joined #gluster
06:41 sripathi joined #gluster
06:50 sripathi joined #gluster
06:55 _br_ joined #gluster
06:57 melanor9 joined #gluster
07:00 deepakcs joined #gluster
07:03 jjnash left #gluster
07:10 errstr joined #gluster
07:10 guigui3 joined #gluster
07:33 sripathi joined #gluster
07:44 venkat_ joined #gluster
07:45 ctria joined #gluster
07:46 ben__ joined #gluster
07:50 venkat_ joined #gluster
07:52 melanor9 joined #gluster
07:54 puebele joined #gluster
08:08 sripathi joined #gluster
08:12 Humble joined #gluster
08:14 puebele joined #gluster
08:22 mohankumar joined #gluster
08:32 tjikkun_work joined #gluster
08:44 sashko joined #gluster
08:45 lala_ joined #gluster
08:47 JuanBre joined #gluster
08:52 sripathi joined #gluster
08:58 sripathi1 joined #gluster
08:59 sripathi joined #gluster
09:03 hybrid512 joined #gluster
09:07 nb-ben joined #gluster
09:08 Norky joined #gluster
09:21 sripathi joined #gluster
09:27 Staples84 joined #gluster
09:27 w3lly joined #gluster
09:32 lala_ joined #gluster
09:43 glusterbot New news from resolvedglusterbugs: [Bug 765267] Problem implementing Gluster Quotas <http://goo.gl/iMZBe> || [Bug 765153] quota: directory limit for specified quota gets crossed <http://goo.gl/vBLL8>
09:53 melanor91 joined #gluster
09:58 tryggvil joined #gluster
10:02 _benoit_ joined #gluster
10:15 rcheleguini joined #gluster
10:26 manik joined #gluster
10:30 Joda joined #gluster
10:34 venkat_ joined #gluster
10:35 vimal joined #gluster
10:38 shireesh joined #gluster
10:42 cyberbootje joined #gluster
10:54 dobber joined #gluster
10:57 ekuric joined #gluster
11:13 20WABXUC7 joined #gluster
11:13 glusterbot New news from resolvedglusterbugs: [Bug 764429] Quota: add-brick creates the size go awkward, though it was perfect earlier <http://goo.gl/ioYUO> || [Bug 802910] quota: after rebalance quota list keeps displaying different value <http://goo.gl/gdMtf> || [Bug 821725] quota: brick process kill allows quota limit cross <http://goo.gl/jjUkK>
11:14 20WABXUC7 hi guys when im mounting gluster in ubuntu. it said mount failed
11:14 20WABXUC7 this is what i use sudo mount.glusterfs gluster:/gv0 /mnt/glusterfs/
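A minimal sketch of how a failed fuse mount like that is usually diagnosed (the hostname, volume name and mount point are taken from the line above; the client log path and the log-level option are the usual ones and may differ per install):

    # on a server: confirm the volume exists and is started
    sudo gluster volume info gv0

    # on the client: retry with a higher log level, then read the client log
    sudo mount -t glusterfs -o log-level=DEBUG gluster:/gv0 /mnt/glusterfs/
    sudo tail -n 50 /var/log/glusterfs/mnt-glusterfs.log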
11:28 ekuric left #gluster
11:30 sripathi joined #gluster
11:31 lh joined #gluster
11:31 lh joined #gluster
11:34 overclk joined #gluster
11:34 duerF joined #gluster
11:39 tryggvil joined #gluster
11:43 glusterbot New news from resolvedglusterbugs: [Bug 795735] [6d19136de7af9135dd23662f18c3ee544a2888da]: df does not show quota limit for clients which are mounted before enabling quota <http://goo.gl/Uwbgy>
11:45 raj_ joined #gluster
11:53 vikumar joined #gluster
11:54 raj_ left #gluster
11:59 andrei joined #gluster
12:03 lh joined #gluster
12:13 andrei hello guys
12:14 andrei I was wondering if someone could help me with an issue that I have when I add a new brick?
12:15 andrei I've got a single storage server and I was trying to add a new brick to it so that they replicate the data
12:15 andrei adding the brick seems to work okay and I can mount the gluster fs on the client side with both servers in replicate mode
12:16 raven-np joined #gluster
12:17 andrei i've started the self heal process by doing ls -laR on the mount point.
12:17 andrei and I can see the second server being populated with data
12:18 andrei however, a few moments after starting the ls -laR on the mountpoint it hanged and stopped responding to any of the directory listing or copy commands
12:18 andrei the same happened on all clients that had mounted the glusterfs
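For reference, a sketch of the add-brick step andrei describes, going from a single brick to a replica pair on 3.3 (the volume and brick names here are placeholders, not his actual ones):

    # on the existing server: raise the replica count while adding the new brick
    gluster volume add-brick myvol replica 2 server2:/export/brick1
    gluster volume info myvol    # Type should now read Replicate with two bricks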
12:23 dobber joined #gluster
12:26 rwheeler joined #gluster
12:26 melanor9 joined #gluster
12:27 jdarcy_ joined #gluster
12:27 sashko joined #gluster
12:28 ShaunR joined #gluster
12:28 ShaunR joined #gluster
12:29 chacken1 joined #gluster
12:39 bala joined #gluster
12:43 sripathi joined #gluster
12:44 glusterbot New news from resolvedglusterbugs: [Bug 841617] after geo-replication start: glusterfs process eats memory until OOM kills it <http://goo.gl/CBD2r> || [Bug 764062] Oplock problem with samba <http://goo.gl/S88b5>
13:02 guigui3 joined #gluster
13:03 edward joined #gluster
13:11 mynameisbruce_ joined #gluster
13:14 bala joined #gluster
13:14 hagarth joined #gluster
13:16 dustint_ joined #gluster
13:16 dustint joined #gluster
13:17 bala joined #gluster
13:21 rastar joined #gluster
13:42 plarsen joined #gluster
13:51 aliguori joined #gluster
13:55 rgustafs joined #gluster
14:06 purpawork joined #gluster
14:06 furkaboo joined #gluster
14:07 vimal joined #gluster
14:14 jack joined #gluster
14:15 tryggvil joined #gluster
14:23 rastar joined #gluster
14:28 mohankumar joined #gluster
14:36 guigui3 joined #gluster
14:42 melanor91 joined #gluster
14:45 rwheeler joined #gluster
14:58 Ryan_Lane joined #gluster
14:58 vikumar joined #gluster
15:00 stopbit joined #gluster
15:03 chouchins joined #gluster
15:06 VSpike joined #gluster
15:14 deepakcs joined #gluster
15:15 Ryan_Lane I'm still having issues with some volumes not coming up when I restart gluster
15:15 Ryan_Lane http://dpaste.com/908493/
15:15 glusterbot Title: dpaste: #908493 (at dpaste.com)
15:15 Ryan_Lane that's a paste from one of the volumes that is failing on the brick it is failing on
15:15 wushudoin joined #gluster
15:19 w3lly joined #gluster
15:26 balunasj joined #gluster
15:26 elyograg Ryan_Lane: just looked.  i'm not well-versed in these things, so apologies if I can't help.  Did you remove the hostname/IP from line 4, or is it really blank as the log shows?  If it's really blank, that seems very odd.  If it's not blank, then this might be a firewall or DNS issue.
15:26 Ryan_Lane it's blank
15:26 bugs_ joined #gluster
15:26 Ryan_Lane it's not a firewall issue, the volume process doesn't start on the brick
15:26 Ryan_Lane also, no firewall on the nodes
15:27 elyograg kkeithley: did you see my new message on the mailing list? I can't get gluster UFO to work even on CentOS.
15:27 elyograg Ryan_Lane: does 'gluster volume info' and 'gluster volume status' look OK?
15:27 Ryan_Lane gluster volume info is fine
15:27 elyograg and 'gluster peer status' on all servers?
15:27 Ryan_Lane gluster volume status shows processes down
15:27 Ryan_Lane also fine
15:28 Ryan_Lane though one server shows an IP rather than a hostname
15:28 Ryan_Lane on all nodes
15:28 elyograg the way to fix that is to probe that peer from one of the other peers.  it's normal for the first peer to be an IP until you do that.
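The fix elyograg describes is a single probe run from any other peer; a sketch with placeholder hostnames:

    # run on a peer that currently shows server1 by IP
    gluster peer probe server1.example.com
    gluster peer status    # the entry should now show the hostname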
15:28 Ryan_Lane could that be related to this issue, though?
15:29 Ryan_Lane I have about 350 volumes
15:29 Ryan_Lane not all of them have this problem
15:29 elyograg i don't know.  i would expect an IP to work better.  are all servers on the same ip subnet and LAN?
15:29 Ryan_Lane yep
15:29 Ryan_Lane I don't think it's a connectivity issue
15:30 Ryan_Lane since the hostname doesn't show in the log
15:30 elyograg that's a lot of volumes!  if you've got other volumes on the same servers that are working fine, that eliminates a whole list of possible problems, I think.
15:31 Ryan_Lane yeah, it's just specific volumes having problems and only on specific hosts
15:31 Ryan_Lane if I use gluster volume start <volume-name> force, it comes up
15:32 Ryan_Lane I find that to be especially odd
15:32 elyograg If your volumes are created using hostnames, I think it's probably a good idea to fix that peer/IP thing, but without a clear OK from a channel regular, please don't do it on my recommendation.  I would hate to have your other volumes get screwed up.
15:32 Ryan_Lane the volumes do use hostnames
15:33 elyograg I've heard JoeJulian tell people to use 'volume start force' to fix stubborn problems.
15:33 Ryan_Lane well, it's not a great solution
15:34 Ryan_Lane every time I reboot a node I need to force start like 60 volumes
15:34 Ryan_Lane and it makes the CPU and waitio go insane
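A hedged sketch of the sort of loop that would force-start every volume after a reboot (it assumes all volumes are meant to be running; the sleep just spaces out the brick spawns so CPU and waitio don't spike all at once):

    #!/bin/bash
    for vol in $(gluster volume info | awk '/^Volume Name:/ {print $3}'); do
        gluster volume start "$vol" force
        sleep 2
    done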
15:34 elyograg that's a problem.
15:34 elyograg I'd say you should file a bug if you haven't already.
15:34 glusterbot http://goo.gl/UUuCq
15:34 elyograg glusterbot: thanks!
15:34 glusterbot elyograg: I do not know about 'thanks!', but I do know about these similar topics: 'thanks'
15:34 elyograg heh.
15:36 delete joined #gluster
15:37 delete hi, I am using geo replication between server a (master) and server b (slave). When I create files on server b they don't sync to the master is that normal? how can I change that behavior?
15:39 delete the sync actually removes the new changes on server b
15:39 delete I need some sort of async mirror between the server
15:39 elyograg delete: georeplication is one way.  master to slave.
15:40 delete do you know any 2 way solution?
15:40 elyograg delete: there's talk of making it bidirectional in a future version.
15:42 elyograg delete: according to the 3.4 planning docs, the plan is for 3.4 to have multi-master georeplication. http://www.gluster.org/community/documentation/index.php/Planning34
15:42 glusterbot <http://goo.gl/4yWrh> (at www.gluster.org)
15:43 elyograg there is a QA release of 3.4 available, which I imagine comes with all the usual "not our responsibility if you choose to use this in production" caveats.
15:43 rwheeler joined #gluster
15:45 vpshastry joined #gluster
15:54 rastar joined #gluster
16:01 delete elyograg: thank you, I switched to unison, that is all I need so far
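For the record, a minimal unison invocation for the kind of two-way sync delete ended up with might look like this (the paths and hostname are hypothetical):

    # two-way, non-interactive sync of /srv/data with serverb, preserving mtimes
    unison /srv/data ssh://serverb//srv/data -batch -auto -times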
16:05 sjoeboo joined #gluster
16:08 flakrat joined #gluster
16:11 nueces joined #gluster
16:12 andrei hello guys
16:12 andrei I was wondering if someone could help me with an issue that I have when I add a new brick?
16:12 andrei I've got a single storage server and I was trying to add a new brick to it so that they replicate the data
16:12 andrei adding the brick seems to work okay and I can mount the gluster fs on the client side with both servers in replicate mode
16:12 andrei i've started the self heal process by doing ls -laR on the mount point.
16:12 andrei and I can see the second server being populated with data
16:12 andrei however, a few moments after starting the ls -laR on the mountpoint it hanged and stopped responding to any of the directory listing or copy commands
16:13 andrei the same happened on all clients that had mounted the glusterfs
16:13 Norky joined #gluster
16:21 nightwalk joined #gluster
16:22 jjnash joined #gluster
16:23 amccloud joined #gluster
16:25 noob2 joined #gluster
16:26 luckybambu joined #gluster
16:26 daMaestro joined #gluster
16:27 noob2 has anyone else seen gluster nfs traffic go over port 760?
16:29 elyograg noob2: Not an expert, but I think that gluster just plugs into the standard RPC/NFS infrastructure, so it can/will go over any port that standard NFS will.
16:30 noob2 ok
16:30 noob2 i have a rhel5.1 client that is using nfs that seems to talk over port 760 but it's the only one that does that
16:30 noob2 everyone else is 2049
16:31 noob2 the source port on the client is a 38,000 port but the dest on gluster is 760
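Two quick checks for which ports a Gluster NFS server has actually registered (run from any host that can reach it; rpcinfo ships with rpcbind):

    rpcinfo -p glusterserver     # program/version/port pairs registered with portmap
    showmount -e glusterserver   # confirms the exports are visible over NFS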
16:46 ultrabizweb joined #gluster
16:48 w3lly joined #gluster
16:48 glusterbot New news from newglusterbugs: [Bug 907540] Gluster fails to start many volumes <http://goo.gl/jNFB3>
16:52 Norky joined #gluster
16:56 sashko joined #gluster
16:59 Humble joined #gluster
17:05 Norky joined #gluster
17:05 lala joined #gluster
17:10 Mo___ joined #gluster
17:13 melanor9 joined #gluster
17:13 dustint joined #gluster
17:17 neofob joined #gluster
17:19 hagarth joined #gluster
17:41 sjoeboo_ joined #gluster
17:41 samppah joined #gluster
17:41 melanor9 joined #gluster
18:03 flakrat joined #gluster
18:06 jbrooks joined #gluster
18:17 hateya joined #gluster
18:23 jbrooks joined #gluster
18:25 sjoeboo joined #gluster
18:34 nueces joined #gluster
18:47 jds2001 joined #gluster
19:13 melanor9 joined #gluster
19:20 plarsen joined #gluster
19:37 zaitcev joined #gluster
19:44 tryggvil joined #gluster
19:44 nca_ joined #gluster
19:51 partner boy i'm puzzled on how to setup the hw.. wish there was one "over the others" option but i'm yet to find it..
19:56 semiosis what?
19:57 JoeJulian Just do something and be confident about it. One good thing about being on the bleeding edge is that you can't be doing it the wrong way because there is no right way.
19:57 partner just thinking out loud. i'll probably get 3 servers with 12 disks each. but i still haven't got a clue on how to arrange the setup (ie. not a technical problem)
19:58 partner i don't want to just "do something" on production, i want to have a plan :)
19:58 JoeJulian pfft... chicken. ;)
19:58 partner hehe
19:59 partner alright, "official community recommendations" taken into account, thanks :D
20:00 JoeJulian It sounded to me like you already had a plan. You didn't want to lose any of your drive space so brick-per-disk seems obvious.
20:00 elyograg partner: general advice from me: have a number of servers that's a multiple of your replication factor.
20:00 elyograg it's not necessary to do this, but it sure makes locating the bricks easier. :)
20:00 partner well yes BUT i haven't tested it out at all so i'm not currently comfortable with it..
20:01 JoeJulian Put it together and start testing. When management says, "Oh, you've got it working" and puts it into production even though your testing isn't complete, you have an "I told you so" opportunity.
20:02 partner JoeJulian: that's what happened with my testing instances..
20:02 JoeJulian If you just keep hmming and hawing, it's going to go into production with no testing at all.
20:03 partner sure, i've got the quotes already so the order is going to be placed in few days
20:05 semiosis JoeJulian: i fight that battle every day
20:05 partner i am obviously targetting for 0 downtime infra but the price is quite high for such so maybe i just need to relax _my_ requirements.. :)
20:05 JoeJulian btw... hardware going into production before testing is completed is an industry-wide problem. Just get used to it.
20:06 JoeJulian target 0 downtime, plan for total system failure.
20:06 partner the usual
20:07 Staples84 joined #gluster
20:10 penglish Ola
20:11 partner "i don't often test my code but when i do i do it in production"
20:11 penglish I'm wondering if there's a convenient way to get _quick_ disk usage updates on a given subdir in a gluster cluster
20:11 penglish I say this without having looked at how filesystem quotas might interact with gluster.. that's the way I achieve the same thing on an Isilon system
20:12 penglish derp. I see that quotas can be set on gluster itself
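The directory-level quota feature penglish spotted is driven from the gluster CLI; a sketch with placeholder names:

    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage /projects/foo 10GB
    gluster volume quota myvol list    # shows the limit and current usage per path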
20:13 flrichar I keep thinking about how I will eventually use gluster
20:14 flrichar I consider it "lvm on the network"
20:14 JoeJulian It's the cell phone dilemma. You can put it off forever waiting for that new feature you have to have (that you didn't need last month). :)
20:16 penglish JoeJulian: or in my case, waiting to drop your cell phone that one last time, such that it actually completely stops working. :-D
20:21 JoeJulian penglish: I'm that way with my Evo 4G at the moment myself.
20:21 JoeJulian Damned thing just keeps working!
20:22 penglish I only got rid of my Palm Treo 680 last year
20:25 flrichar I had my nexus s for 2 years
20:25 flrichar with a broken power button for most of the last year
20:35 penglish anyone: what are some good things to monitor (ie: page and/or email) on a gluster cluster
20:35 penglish does anyone have a canned list?
20:35 penglish Obviously, I'll want to know when a brick fails - which could be glusterd dying on the brick, the brick itself dying (monitor via ssh & icmp)
20:36 penglish Or when there's a RAID event on a brick (I already have monitoring coverage for this)
20:36 semiosis i use nagios to watch the process table, brick/client mounts, and log files
20:37 semiosis see my ,,(puppet) module
20:37 glusterbot (#1) https://github.com/semiosis/puppet-gluster, or (#2) https://github.com/purpleidea/puppet-gluster
20:37 semiosis not perfect but quite effective
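A rough sketch in the spirit of the checks semiosis describes (process table, brick status); the logic is illustrative and not taken from the puppet module:

    #!/bin/bash
    # exit codes follow Nagios conventions: 0=OK, 1=WARNING, 2=CRITICAL
    pgrep -x glusterd >/dev/null || { echo "CRITICAL: glusterd not running"; exit 2; }
    pgrep -x glusterfsd >/dev/null || { echo "WARNING: no brick (glusterfsd) processes"; exit 1; }
    # offline bricks show "N" in the Online column (second-to-last field in 3.3's layout)
    if gluster volume status 2>/dev/null | awk '$(NF-1) == "N"' | grep -q .; then
        echo "WARNING: some brick or auxiliary processes are offline"; exit 1
    fi
    echo "OK: glusterd and brick processes running"; exit 0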
20:37 partner back from hmming and hawing and lots of crying for you mocking me.. not :)
20:38 penglish semiosis: thank you very much! I'll look that over
20:39 semiosis yw
20:41 flrichar all in all it's just another brick in the cluster
20:59 JoeJulian hehe
20:59 VeggieMeat_ joined #gluster
21:12 randomcamel left #gluster
21:19 glusterbot New news from newglusterbugs: [Bug 831699] Handle multiple networks better <http://goo.gl/SrRIn>
21:23 ctria joined #gluster
21:41 polenta joined #gluster
21:46 _br_ joined #gluster
21:48 _br_- joined #gluster
21:57 _br_ joined #gluster
22:10 _br_ joined #gluster
22:13 _br_ joined #gluster
22:14 _br_ joined #gluster
22:14 pandagerbil joined #gluster
22:24 jbrooks joined #gluster
22:24 melanor9 joined #gluster
22:26 ackjewt joined #gluster
22:29 pandagerbil joined #gluster
22:37 redsolar joined #gluster
22:55 andrei joined #gluster
22:56 andrei hello guys
22:56 andrei wondering if someone could help me with some issues I am having adding a new brick?
22:57 JoeJulian Great... now I'm wondering too...
23:00 JoeJulian My curiosity is waning....
23:04 JoeJulian andrei: I've come to the conclusion that nobody will be able to help you (at least with your current level of participation in the discussion). I don't think anyone is sufficiently psychic.
23:09 noob2 joined #gluster
23:13 andrei JoeJulian: )))
23:14 andrei I've got a single storage server and I was trying to add a new brick to it so that they replicate the data
23:14 andrei adding the brick seems to work okay and I can mount the gluster fs on the client side with both servers in replicate mode
23:14 andrei i've started the self heal process by doing ls -laR on the mount point.
23:14 andrei and I can see the second server being populated with data
23:14 andrei however, a few moments after starting the ls -laR on the mountpoint it hanged and stopped responding to any of the directory listing or copy commands
23:14 andrei the same happened on all clients that had mounted the glusterfs
23:14 pandagerbil joined #gluster
23:14 JoeJulian Any errors in the logs?
23:14 JoeJulian What version?
23:14 andrei 3.3.0
23:15 andrei on the servers
23:15 andrei and I think 3.3.1 on the clients
23:15 JoeJulian Upgrade. I think that one's fixed.
23:15 JoeJulian Also, always upgrade your servers first.
23:15 andrei JoeJulian: i can't update the servers to 3.3.1 as it will break rdma
23:15 JoeJulian why?
23:15 andrei i've tried it yesterday
23:15 JoeJulian Do you have a bug report?
23:15 andrei 3.3.1 doesn't work with rdma
23:16 andrei spent like a day trying to figure this out
23:16 andrei ended up installing 3.3.0 and it worked like a charm
23:16 andrei not yet, i've not done the report yet as I was debugging this issue until 5am and didn't get much sleep
23:16 andrei however, I will do that shortly
23:16 JoeJulian been there...
23:17 andrei JoeJulian: can you tell me what should be the behaviour during self healing?
23:17 andrei should all clients work okay?
23:17 andrei apart from some performance issues?
23:17 JoeJulian Personally, I just do "gluster volume heal $vol full" since 3.3.
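The server-side heal commands being referred to, available since 3.3 ($vol is a placeholder volume name):

    gluster volume heal $vol full           # crawl the volume and queue everything for self-heal
    gluster volume heal $vol info           # entries still pending heal
    gluster volume heal $vol info healed    # entries healed so far (3.3.x syntax)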
23:18 andrei because I am planning to give it another try tonight
23:18 JoeJulian I suspect it's a bug that I was encountering in 3.3.0 (and I think I may have reported it) but it's fixed in 3.3.1.
23:19 andrei i can shutdown the vms and try
23:19 andrei the trouble is I've got around 5tb of data that needs migrating over to the second server
23:19 andrei and it will take ages and I can't really have the frozen mountpoints for that long )))
23:20 JoeJulian I know the feeling...
23:21 JoeJulian Looks like the only rdma differences between 3.3.0 and 3.3.1 is logging.
23:22 andrei JoeJulian: i've done some debugging and this is what happens
23:22 andrei when you setup rdma only volume it uses port 24009 instead of 24008 to communicate
23:23 andrei however, when you mount the share from the clients and monitor the traffic it makes connections to 24008
23:23 andrei there are no attempts to connect to 24009 at all
23:24 andrei so it just sits there for ages trying to connect...
23:24 andrei perhaps during the creation of rdma only volume the config files are not properly created to indicate which port to use
23:25 melanor9 joined #gluster
23:25 JoeJulian 24008 is the glusterd port
23:26 andrei and since i've not had any experience with the old style glusterfs where you have to create configs, i pretty much rely on the cli
23:26 andrei isn't 24007 a glusterd port?
23:27 JoeJulian @ports
23:27 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
23:27 andrei actually, perhaps the issue is with 3.3.1 is that it doesn't open 24008 port
23:27 andrei because on 3.3.1 i only get 24007 and 24009
23:28 andrei without 24008
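A quick way to verify which of those ports glusterd actually has open on a server (whether the rdma management listener shows up here depends on the transport, so treat it as a rough check):

    # glusterd management: 24007/tcp (+24008 for rdma); bricks: 24009 and up
    netstat -tlnp | grep gluster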
23:28 JoeJulian And I just double checked the source. Definitely 24008
23:28 JoeJulian So... which distro?
23:28 andrei servers are ubuntu 12.04 lts
23:28 andrei clients are centos 6.3
23:28 andrei i've got 2 servers and 2 clients
23:29 JoeJulian And you're using the ,,(ppa) for ubuntu?
23:29 glusterbot The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
23:29 andrei i've temporarily disabled glusterfs on the second server and didn't wait for it to finish the heal process as I need the mountpoints
23:29 sjoeboo joined #gluster
23:30 JoeJulian Sounds good. Firewall it and install 3.3.1. Truncate /var/log/glusterfs/etc-glusterfs-glusterd.vol.log and start glusterd and [fd]paste the log.
23:31 JoeJulian As long as it's firewalled off we can do that without interfering with your clients.
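A sketch of the isolate-and-retest procedure JoeJulian outlines; the port range, log path and service name (glusterfs-server on the Ubuntu packages) are the usual ones, adjust as needed:

    # on the second server only: keep clients and peers away while testing 3.3.1
    iptables -I INPUT -p tcp --dport 24007:24100 -j REJECT

    # start with an empty management log, bring glusterd up, then paste the log
    : > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    service glusterfs-server start
    cat /var/log/glusterfs/etc-glusterfs-glusterd.vol.log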
23:32 andrei the trouble is i've got a live environment and I do not want to mess up the existing data
23:32 andrei if I install 3.3.1 on top of 3.3.0
23:32 andrei would it work okay and not mess up my data?
23:32 JoeJulian Doesn't touch the data and can be rolled back without issue.
23:33 andrei so, if 3.3.1 doesn't work, i can simply remove the volume, remove the .glusterfs folder and do the setfattr business that you mention in your blog and roll back 3.3.0?
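The "setfattr business" being referred to is the usual way to make an old brick directory reusable in a recreated volume; a hedged sketch (the brick path is a placeholder, and this permanently strips gluster's metadata from it):

    setfattr -x trusted.glusterfs.volume-id /export/brick1
    setfattr -x trusted.gfid /export/brick1
    rm -rf /export/brick1/.glusterfs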
23:33 JoeJulian If 3.3.1 doesn't work, doesn't apt have the ability to downgrade?
23:34 * JoeJulian uses rpm and yum...
23:35 JoeJulian Hmm... otoh I don't see a 3.3.0 deb so I'm not sure how you would downgrade again.
23:36 andrei JoeJulian: yes, it does
23:36 andrei that's right
23:36 andrei i was just about to say that the official ppa doesn't have the 3.3.0 option
23:36 andrei i've used one of his/her old repos to get the 3.3.0 debian
23:36 andrei which i've used in the past for testing
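If a rollback is ever needed, apt can install an explicit older version as long as some repository still carries it (the version string below is illustrative only):

    apt-cache policy glusterfs-server           # list the versions apt can see
    apt-get install glusterfs-server=3.3.0-1    # example version string, not the real one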
23:37 JoeJulian I don't suppose these are VMs you can snapshot...
23:37 JoeJulian No....
23:37 andrei )))
23:37 andrei nope, they are live servers )))
23:37 andrei with live data and clients
23:38 JoeJulian Well, if it was up to me I'd say f it and just upgrade, but I've looked at the code and there's no significant code changes. Between myself and semiosis (if it's a package problem) we should be able to solve this.
23:39 andrei JoeJulian: are you going to be online in the next couple of hours?
23:39 andrei in case if things go sour?
23:39 JoeJulian yep
23:40 RicardoSSP joined #gluster
23:41 andrei thanks!
23:41 andrei i will shutdown all vms now and try to upgrade
23:42 shawns|work joined #gluster
23:47 grade_ joined #gluster
23:49 rwheeler joined #gluster
