IRC log for #gluster, 2013-01-06


All times shown according to UTC.

Time Nick Message
00:10 badone kkeithley: ping
00:10 greylurk joined #gluster
00:46 sadsfae joined #gluster
01:30 kevein joined #gluster
02:09 JoeJulian Need to replace a 24 and a 48 port GigE switch (both failed simultaneously!?!?). Anyone here have any recommendations?
02:45 sunus joined #gluster
02:53 m0zes JoeJulian: bnt are my favs. netgear are decent for budget switches
02:57 JoeJulian Netgear lost me a long time ago with their customer disservice.
03:28 raven-np joined #gluster
03:57 mohankumar joined #gluster
04:21 badone kkeithley: unping
05:22 hagarth1 joined #gluster
05:24 raven-np joined #gluster
05:39 greylurk joined #gluster
06:18 hateya joined #gluster
07:17 raven-np joined #gluster
09:25 ultrabizweb joined #gluster
11:12 eightyeight joined #gluster
11:27 hateya joined #gluster
12:36 NuxRo Hi guys, happy new year
12:37 NuxRo latest "yum update" upgraded my glusterfs and all my volumes became unreachable
12:37 NuxRo if i try to start them it says "already started", but if i do a "stop" then a "start" they come back and i can use them as usual
12:38 NuxRo is it normal for "yum update" to result in unusable volumes?
12:39 NuxRo ping kkeithley :)
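For reference, the stop/start workaround NuxRo describes would look roughly like this on one of the servers; "myvol" is a placeholder volume name.

    # check what gluster thinks the volume is doing
    gluster volume info myvol
    gluster volume status myvol

    # volume claims "already started" but is unreachable: bounce it
    gluster volume stop myvol
    gluster volume start myvol

    # confirm the brick processes are back online
    gluster volume status myvol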
13:32 greylurk joined #gluster
14:26 chirino joined #gluster
14:49 avati joined #gluster
14:53 a2_ joined #gluster
14:55 raven-np joined #gluster
15:00 edward1 joined #gluster
15:49 chirino joined #gluster
16:11 bdperkin joined #gluster
16:31 raven-np1 joined #gluster
16:40 RicardoSSP joined #gluster
17:12 raven-np joined #gluster
17:42 JoeJulian NuxRo: Not really, no.
18:04 NuxRo JoeJulian: thanks! after an upgrade which services ought to be restarted? I restart both glusterd and glusterfsd
18:05 JoeJulian The rpm /should/ handle that automatically. If you issued a glusterfsd restart after glusterd, though, that's why your bricks were unavailable.
18:05 JoeJulian @processes
18:05 glusterbot JoeJulian: the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more information.
18:06 JoeJulian The only thing the glusterfsd init script is really good for any more is to ensure the bricks are stopped in the right order when shutting down.
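A rough sketch of the upgrade-and-restart sequence JoeJulian describes, assuming the init scripts shipped with the 3.3 RPMs on a RHEL/CentOS-style system:

    # the RPM scriptlets normally restart glusterd for you during "yum update";
    # if you do it by hand, restart only the management daemon
    service glusterd restart    # glusterd will (re)spawn missing glusterfsd brick daemons

    # do NOT follow it with "service glusterfsd restart": per JoeJulian, that
    # init script mainly exists to stop bricks in the right order at shutdown,
    # and its stop phase is what leaves the bricks (and the volumes) offline
    gluster volume status    # verify bricks, NFS server and self-heal daemon are online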
18:06 NuxRo roger that, I'll keep my eyes wide open at the next yum update
18:18 NuxRo JoeJulian: have you ever tried to run openstack cinder off glusterfs nfs?
18:18 JoeJulian No
18:19 JoeJulian Since cinder's for block based storage it would have to be (for now) loopback devices hosted on gluster. That would rather defeat my purpose of backing my vms on shared storage though.
18:20 rastar joined #gluster
18:20 JoeJulian This might work for that, though, in 3.4: http://www.gluster.org/community/documentation/index.php/Planning34/BD
18:20 glusterbot <http://goo.gl/LglqL> (at www.gluster.org)
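A minimal sketch of the loopback arrangement JoeJulian mentions, with placeholder host, volume and file names; it only illustrates the idea, not a supported cinder backend:

    # mount the gluster volume with the FUSE client
    mount -t glusterfs server1:/myvol /mnt/gluster

    # back a block device with a sparse file that lives on the shared volume
    truncate -s 10G /mnt/gluster/cinder-vol1.img
    losetup /dev/loop0 /mnt/gluster/cinder-vol1.img

    # /dev/loop0 now behaves like a raw disk and could be handed to a hypervisor
    losetup -d /dev/loop0    # detach when finished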
18:21 NuxRo the cool stuff is always in the next version :-)
18:21 JoeJulian That's been true since the 80's.
18:22 NuxRo yep :-)
18:23 JoeJulian Luckily, that's why I get to re-engineer our systems every time I finish re-engineering them. :D
18:23 NuxRo haha
18:23 NuxRo anyway, I'll update you guys once (if) I get cinder to work off NFS
18:24 JoeJulian That would be cool.
18:24 NuxRo i really don't want to duplicate my storage infra just to run cinder/iscsi
18:25 JoeJulian What are you doing with your vms?
18:25 NuxRo if that works and the swift proxy does its job, half of my openstack will run off gluster with qcow images on local raid10
18:25 NuxRo sell them to customers :)
18:25 NuxRo hopefully
19:00 plarsen joined #gluster
19:53 polfilm Is there a web gui for gluster these days?
19:54 twx_ @yum repo
19:55 twx_ @yum
19:55 twx_ @repo
19:55 twx_ derp
19:56 semiosis polfilm: http://www.ovirt.org/Features/Gluster_Support
19:56 JordanHackworth joined #gluster
19:57 glusterbot joined #gluster
19:58 semiosis ,,(ovirt)
19:58 glusterbot http://goo.gl/fwlP5
19:59 polfilm semiosis: so no basically
20:00 polfilm semiosis: there used to be a commercial one a long time ago, back when gluster was gluster. wonder what happened to that. ovirt i love, but it's for a totally different purpose
20:01 semiosis that commercial one was discontinued long ago
20:04 semiosis why do you want a gui anyway?  (just curious)
20:05 polfilm semiosis: i like pretty interfaces
20:06 semiosis oh ok
20:07 polfilm semiosis: i just came across a youtube demo of the gui, wondered what happened to it.
20:08 glusterbot joined #gluster
20:08 semiosis ,,(gmc) ?
20:08 glusterbot The Gluster Management Console (GMC) has been discontinued. If you need a pretty gui to manage storage, support for GlusterFS is in oVirt.
20:08 semiosis glusterbot: awesome
20:08 glusterbot semiosis: ohhh yeeaah
20:09 polfilm semiosis: have you seen Bright Cluster Manager GUI? there's something to be said for a properly constructed gui.
20:09 DataBeaver joined #gluster
20:10 semiosis @lucky bright cluster manager
20:10 glusterbot semiosis: http://goo.gl/k7Mlg
20:11 semiosis never heard of it
20:11 polfilm semiosis: just an example, really the first one i could find. weird that it's showing up when i type glusterfs gui into youtube. most likely not an accident.
20:13 semiosis first yt result i get for 'glusterfs gui' is managing glusterfs with ovirt
20:13 semiosis s/with/from/
20:13 glusterbot semiosis: Error: I couldn't find a message matching that criteria in my history of 393 messages.
20:13 semiosis s/with/from/
20:13 glusterbot What semiosis meant to say was: first yt result i get for 'glusterfs gui' is managing glusterfs from ovirt
20:13 semiosis glusterbot: you laggin
20:13 semiosis glusterbot: reconnect
20:13 glusterbot semiosis: Error: You don't have the owner capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
20:13 semiosis glusterbot: meh
20:13 glusterbot semiosis: I'm not happy about it either
20:14 semiosis glusterbot: reconnect
20:14 glusterbot semiosis: Error: You don't have the owner capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
20:14 semiosis whatever
20:44 stigchri_ joined #gluster
21:22 JoeJulian @reconnect
21:23 glusterbot joined #gluster
21:31 hateya joined #gluster
21:51 JoeJulian @ppa
21:51 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
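For Ubuntu users following that link, the install would look roughly like this; the PPA path below is a placeholder, the real one is behind glusterbot's link:

    sudo add-apt-repository ppa:<maintainer>/<glusterfs-3.3>   # use the PPA named at the link above
    sudo apt-get update
    sudo apt-get install glusterfs-server    # on servers
    sudo apt-get install glusterfs-client    # on clients only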
21:53 plarsen joined #gluster
22:13 rodlabs joined #gluster
22:16 badone joined #gluster
22:20 hagarth joined #gluster
22:25 badone joined #gluster
22:29 badone joined #gluster
22:31 badone joined #gluster
23:15 dhsmith joined #gluster
23:21 raven-np joined #gluster
