IRC log for #gluster, 2014-01-28

All times shown according to UTC.

Time Nick Message
00:03 mattappe_ joined #gluster
00:06 theron joined #gluster
00:31 badone joined #gluster
00:36 mattappe_ joined #gluster
00:42 diegows joined #gluster
00:59 khushildep_ joined #gluster
01:09 zapotah joined #gluster
01:09 mattappe_ joined #gluster
01:18 _pol_ joined #gluster
01:19 glusterbot New news from newglusterbugs: [Bug 1058526] tar keeps reporting "file changed as we read it" on random files <https://bugzilla.redhat.com/show_bug.cgi?id=1058526>
01:55 harish joined #gluster
02:03 rastar joined #gluster
02:23 mattappe_ joined #gluster
02:26 rwheeler joined #gluster
02:32 _pol joined #gluster
02:39 jag3773 joined #gluster
02:57 mattappe_ joined #gluster
02:59 satheesh2 joined #gluster
03:07 satheesh1 joined #gluster
03:08 ricky-ti1 joined #gluster
03:10 kshlm joined #gluster
03:13 satheesh3 joined #gluster
03:21 satheesh1 joined #gluster
03:36 CheRi joined #gluster
03:40 vpshastry joined #gluster
03:42 shubhendu joined #gluster
03:45 davinder joined #gluster
03:56 shyam joined #gluster
03:57 RameshN joined #gluster
04:00 bala joined #gluster
04:13 aravindavk joined #gluster
04:15 itisravi joined #gluster
04:19 glusterbot New news from newglusterbugs: [Bug 1058569] [RHS-RHOS] Openstack glance image corruption after remove-brick/rebalance on the RHS nodes <https://bugzilla.redhat.com/show_bug.cgi?id=1058569>
04:20 _dist joined #gluster
04:24 mohankumar__ joined #gluster
04:24 _dist hey there everyone, I'm trying to get libvirt-1.2.1 to work with libgfapi for storage pools. I followed these instructions http://libvirt.org/storage.html#StorageBackendGluster but I'm getting an input/output error on the start command. Anyone here tried this out before?
04:24 glusterbot Title: libvirt: Storage Management (at libvirt.org)
04:26 ngoswami joined #gluster
04:26 vpshastry joined #gluster
04:29 mohankumar joined #gluster
04:30 ngoswami_ joined #gluster
04:32 _dist figured it out, needed to set "server.allow-insecure on" in the volume. Honestly not sure why yet; I'll need to read up on it
04:34 bharata-rao joined #gluster
04:35 _dist looks like virt-manager 0.9.5 doesn't work well with glusterfs (type 10). And 0.10 doesn't work over x11 yet...
04:48 kdhananjay joined #gluster
04:49 glusterbot New news from newglusterbugs: [Bug 1058569] Openstack glance image corruption after remove-brick/rebalance on the Gluster nodes <https://bugzilla.redhat.com/show_bug.cgi?id=1058569>
04:51 _dist so it looks like virt-manager has an error "gtk.label.set_text() argument 1 must be string, not none" when trying to look at the storage pool.
04:51 jporterfield joined #gluster
04:53 ndarshan joined #gluster
04:54 saurabh joined #gluster
05:11 satheesh1 joined #gluster
05:13 prasanth joined #gluster
05:19 spandit joined #gluster
05:24 rjoseph joined #gluster
05:26 vpshastry joined #gluster
05:27 ppai joined #gluster
05:29 hagarth joined #gluster
05:30 _dist so I got virt-manager 0.10.0 to work over ssh, required gir1.2-vte-290. GlusterFS still not supported, but it might be a compile option, I kinda doubt it though
05:34 meghanam joined #gluster
05:34 meghanam_ joined #gluster
05:35 dusmant joined #gluster
05:38 bala joined #gluster
05:39 bala1 joined #gluster
05:40 psharma joined #gluster
05:40 CheRi joined #gluster
05:52 kanagaraj joined #gluster
05:56 raghu joined #gluster
05:59 nshaikh joined #gluster
06:01 lalatenduM joined #gluster
06:06 leochill joined #gluster
06:18 davinder joined #gluster
06:22 davinder2 joined #gluster
06:23 benjamin__ joined #gluster
06:24 shylesh joined #gluster
06:29 davinder joined #gluster
06:31 shylesh_ joined #gluster
06:45 satheesh1 joined #gluster
06:49 hagarth joined #gluster
07:00 sputnik13 joined #gluster
07:04 satheesh3 joined #gluster
07:06 _pol joined #gluster
07:08 kr1ss joined #gluster
07:08 kr1ss left #gluster
07:15 satheesh1 joined #gluster
07:24 hagarth joined #gluster
07:24 ngoswami joined #gluster
07:31 rastar joined #gluster
07:36 jtux joined #gluster
07:43 satheesh4 joined #gluster
07:48 satheesh2 joined #gluster
07:49 ekuric joined #gluster
07:49 mick27 joined #gluster
07:52 ctria joined #gluster
07:53 qdk joined #gluster
08:03 bharata_ joined #gluster
08:11 bollo joined #gluster
08:15 eseyman joined #gluster
08:16 solid_liq joined #gluster
08:16 solid_liq joined #gluster
08:17 keytab joined #gluster
08:20 TheDingy joined #gluster
08:22 bollo joined #gluster
08:22 raghu joined #gluster
08:22 leochill joined #gluster
08:24 _pol joined #gluster
08:26 s2r2 joined #gluster
08:27 _pol_ joined #gluster
08:30 franc joined #gluster
08:30 franc joined #gluster
08:32 satheesh2 joined #gluster
08:38 ricky-ticky1 joined #gluster
08:41 satheesh4 joined #gluster
08:41 Dga joined #gluster
08:42 b0e joined #gluster
08:43 blook joined #gluster
08:45 crazifyngers joined #gluster
08:47 Dgamax joined #gluster
08:48 rjoseph1 joined #gluster
08:48 harish joined #gluster
08:48 dusmant joined #gluster
08:58 satheesh1 joined #gluster
09:00 mgebbe_ joined #gluster
09:15 herdani joined #gluster
09:16 herdani Hello, is there a way to check read/write speed for a client mount ? Tried 'hdparm' but got a 'Inappropriate ioctl for device'
09:18 rjoseph1 left #gluster
09:18 rjoseph1 joined #gluster
09:25 vpshastry joined #gluster
09:34 ngoswami joined #gluster
09:39 vpshastry joined #gluster
09:58 kshlm joined #gluster
10:00 zapotah joined #gluster
10:03 satheesh1 joined #gluster
10:08 zapotah joined #gluster
10:08 zapotah joined #gluster
10:20 vpshastry1 joined #gluster
10:21 mohankumar__ joined #gluster
10:24 dusmant joined #gluster
10:27 shylesh joined #gluster
10:28 ells joined #gluster
10:29 hybrid512 joined #gluster
10:43 TvL2386 joined #gluster
10:49 hybrid512 joined #gluster
10:53 ndarshan joined #gluster
10:54 dusmant joined #gluster
10:55 hagarth joined #gluster
10:55 RameshN joined #gluster
10:57 kanagaraj joined #gluster
11:00 vpshastry1 joined #gluster
11:03 bala joined #gluster
11:19 wica joined #gluster
11:28 diegows joined #gluster
11:30 ctria joined #gluster
11:30 ppai joined #gluster
11:32 kdhananjay joined #gluster
11:33 ira joined #gluster
11:40 jporterfield joined #gluster
11:45 jporterfield joined #gluster
11:48 Philambdo joined #gluster
11:49 jmarley joined #gluster
11:51 JCxMLnblFl joined #gluster
11:51 JCxMLnblFl left #gluster
11:53 khushildep_ joined #gluster
11:55 vpshastry1 joined #gluster
11:57 kkeithley1 joined #gluster
12:01 japuzzo joined #gluster
12:01 jmarley joined #gluster
12:02 kanagaraj joined #gluster
12:04 ndarshan joined #gluster
12:04 leochill joined #gluster
12:10 ells joined #gluster
12:11 dusmant joined #gluster
12:11 ells joined #gluster
12:12 ira joined #gluster
12:21 hagarth joined #gluster
12:25 jmarley__ joined #gluster
12:26 RameshN joined #gluster
12:29 itisravi_ joined #gluster
12:32 ricky-ticky joined #gluster
12:35 marcoceppi joined #gluster
12:35 marcoceppi joined #gluster
12:40 chirino joined #gluster
12:47 bala joined #gluster
12:48 ira joined #gluster
13:05 social kkeithley_: ping
13:06 social well anyone probably who understands libxlator.c
13:06 social my head does not really get around frame->local and its freeing
13:07 monotek left #gluster
13:07 alugovoi joined #gluster
13:12 ngoswami joined #gluster
13:12 kkeithley_ social: pong
13:13 benjamin__ joined #gluster
13:13 kkeithley_ I'm not sure how much I understand libxlator.c either, but...  What are you seeing that's cause for concern?
13:13 ngoswami joined #gluster
13:18 dusmant joined #gluster
13:21 mattappe_ joined #gluster
13:27 social kkeithley_: cluster_markerxtime_cbk, let's look at it
13:28 social we have frame->local which I got from cluster_getmarkerattr
13:28 social kkeithley_: and in cluster_markerxtime_cbk in out there is a check for if (need_unwind && local && local->xl_specf_unwind)
13:28 social kkeithley_: frame->local is lost to frame->local = local->xl_local;
13:29 social valgrind says it never gets freed :/
13:29 rjoseph joined #gluster
13:29 social kkeithley_: sample valgrind http://paste.fedoraproject.org/72354/09157731
13:29 glusterbot Title: #72354 Fedora Project Pastebin (at paste.fedoraproject.org)
13:31 mattappe_ joined #gluster
13:33 social kkeithley_: I guess shouldn't there be GF_FREE on local?
13:45 aixsyd joined #gluster
13:46 kkeithley_ social: on the surface it sure looks like it. ,,(fileabug)
13:46 glusterbot social: Please file a bug at http://goo.gl/UUuCq
13:51 social kkeithley_: I'll write patch and so on
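A rough sketch of the cleanup social is proposing, purely illustrative: the field names (xl_local, xl_specf_unwind) come from the conversation above, the local variable's type and the surrounding structure are placeholders, and the fragment assumes glusterfs' internal headers (xlator.h, and mem-pool.h for GF_FREE), not the actual libxlator.c code.

    /* in the _cbk, before unwinding: restore the parent's frame->local
     * and release the wrapper local this translator allocated */
    if (local) {
            frame->local = local->xl_local;  /* hand the frame back its own local */
            GF_FREE (local);                 /* free ours so valgrind stops flagging it */
    }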
13:53 sroy joined #gluster
13:57 rcaskey so err, gluster is not good for databases yet gluster integrates qemu...I are confused.
13:59 bennyturns joined #gluster
14:01 ira rcaskey: Virtualization is used for more than databases?
14:02 rcaskey but err...why is it not recommended for arbitrary block stuff then?
14:02 rcaskey and truth be told...i'ld be using it to virtualize a database host, but still :P
14:03 JoeJulian reverse your thinking.... qemu integrates gluster.
14:03 rcaskey so...what's it doing that makes that possible?
14:03 JoeJulian there was rumor that postgres was going to look at using the api to access gluster too.
14:05 JoeJulian If you want to bug Monty, perhaps one or more of the MariaDB storage engines could be adapted as well.
14:05 rcaskey so err, I suppose there are very good reasons it doesn't support arbitrary block devices
14:06 primechuck joined #gluster
14:06 rcaskey is it sitting on top of lots of unwritten changes on the vm or something or storing them in some kind of append-only storage in gluster temporarily or what?
14:06 JoeJulian gluster? Yeah, it's just not built that way.
14:07 jmarley joined #gluster
14:08 JoeJulian qemu wrote an interface to the userspace api. Avoiding all the context switching made the vm disk i/o much more efficient.
14:08 ira JoeJulian has it right.  It isn't bad at this type of workload... FUSE is ;)
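For context, a minimal libgfapi sketch (the volume name "vmvol", host "server1" and the file path are placeholders; the header location and the -lgfapi link flag may vary by distro) showing the kind of direct userspace access qemu's gluster driver uses instead of pushing every I/O through the FUSE kernel module:

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <glusterfs/api/glfs.h>

    int main (void)
    {
            /* talk to the volume directly, no FUSE mount involved */
            glfs_t *fs = glfs_new ("vmvol");                  /* placeholder volume */
            glfs_set_volfile_server (fs, "tcp", "server1", 24007);
            glfs_set_logging (fs, "/tmp/gfapi.log", 7);
            if (glfs_init (fs) != 0) {
                    perror ("glfs_init");
                    return 1;
            }

            glfs_fd_t *fd = glfs_creat (fs, "hello.txt", O_RDWR, 0644);
            if (fd) {
                    const char *msg = "written via libgfapi\n";
                    glfs_write (fd, msg, strlen (msg), 0);
                    glfs_close (fd);
            }

            glfs_fini (fs);
            return 0;
    }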
14:08 JoeJulian I do run innodb on gluster, btw. I shard my innodb files across distribute subvolumes.
14:08 rcaskey so there isn't anything in the way gluster stores data that makes it bad for tiny fragmented writes and random IO
14:09 ira No more than the filesystem it is run on top of?
14:09 ira I'd benchmark your application, certainly.  :)
14:09 rcaskey ira, cop-out answer :P
14:09 JoeJulian bs
14:09 ira The second?
14:10 rcaskey Yeah, just joking, but it's like the drug commercials "results may vary"
14:10 JoeJulian GlusterFS is a tool in your tool belt. See if it works for your use case. There are a lot of use cases and not every tool fits.
14:11 rcaskey JoeJulian, but i suspect 'run a crappy relatively low volume database' inside QEMU is well known territory
14:11 JoeJulian Apparently not.
14:11 morsik JoeJulian: btw, how big your db is on this gluster?
14:12 JoeJulian But like I say, my crappy relatively low volume database runs on its own volume mounted within my vm.
14:12 ira rcaskey:  Honestly, I test everything before putting it into prod.... call it... a hatred of phone calls. ;)
14:12 rcaskey I do to but I try to get some rules of thumb before setting aside days to setup and test :P
14:12 JoeJulian morsik: 4.4Gb
14:13 morsik tiny...
14:13 JoeJulian yep
14:13 ira JoeJulian: Highly transactional?
14:13 mattapperson joined #gluster
14:13 rcaskey morsik, mine's about that way. 16gb but 15gb is just audit data
14:13 JoeJulian But I did a test for linuxcon, and was able to get better than native* on a test database I sharded across 20 dht subvolumes.
14:14 mattappe_ joined #gluster
14:14 JoeJulian Native being a single rackspace VM using one of the standard test tools whose name escapes me at the moment.
14:15 JoeJulian That was a 160Gb test.
14:16 JoeJulian I want to complete that test and do a good write-up, but I'm not going to continue spending my own money at Rackspace to figure it out. I need some OSU OSL time.
14:17 ira JoeJulian: Ping johnmark?
14:17 ira I don't know if he can help... but it is worth a shot. :)
14:18 JoeJulian Actually Lance
14:18 khushildep_ joined #gluster
14:18 blook joined #gluster
14:20 benjamin__ joined #gluster
14:21 theron joined #gluster
14:22 benjamin__ joined #gluster
14:23 theron_ joined #gluster
14:30 dbruhn joined #gluster
14:31 B21956 joined #gluster
14:33 aixsyd dbruhn: "found a 4x link that operates in 1x"
14:33 dbruhn aixsyd, not sure what you mean
14:34 aixsyd dbruhn: this switch v.v
14:35 dbruhn That's what you're seeing in the logs on the switch?
14:36 aixsyd yepper
14:36 aixsyd documentation says, "oh, just replace the switch"
14:37 dbruhn :/ ouch
14:37 aixsyd Isnt that helpful?
14:37 dbruhn I have never had to fight with infiniband like you have.
14:38 aixsyd dbruhn: its getting old, fast.
14:38 JoeJulian You tried a different cable, or swapping the cable with a different connection?
14:38 aixsyd JoeJulian: yep. I know my cables are good - directly connecting the nodes, i get full bandwidth.
14:38 khushildep_ joined #gluster
14:38 thefiguras joined #gluster
14:38 JoeJulian Well, that's the extent of my ib knowledge...
14:39 dbruhn haha
14:39 aixsyd haha
14:39 dbruhn aixsyd, you might just have a bad switch there, hate to say it.
14:39 aixsyd dbruhn: looking like it.
14:39 aixsyd no returns v.v
14:40 dbruhn if it was purchased through ebay, and the seller sold it as a working unit, you can always get ebay involved
14:40 JoeJulian +
14:40 aixsyd well, it said the unit powers on but no further testing was done. and it does power on.
14:41 JoeJulian ebay/paypal generally side with the consumer.
14:41 dbruhn Contact the seller and let them know the issue and see if they have another one they can swap out. You'd be surprised what some personal contact can get you in this anonymous tech purchase.
14:41 aixsyd gotta get my boss to do it v.v
14:42 dbruhn Last guy that sold me something bad took it back, and has since then offered me a job. lol
14:42 aixsyd lol
14:44 dbruhn But in all honesty, my IB gear simply worked out of the box, not a lot of fussing around with it.
14:45 aixsyd dbruhn: i just unplugged a cable that said 1x when connected to the switch, plugged it into a node, says 4x (10gbs) - back to the switch 1x
14:46 hagarth joined #gluster
14:46 dbruhn Well hit your boss up on the switch and let him know it's bad
14:46 aixsyd yarp.
14:48 jobewan joined #gluster
14:50 khushildep_ joined #gluster
14:51 blook joined #gluster
14:53 aixsyd dbruhn: weird - i plugged it into itself. now it says 4x link width, but still 2.5Gbps
14:54 glusterbot New news from newglusterbugs: [Bug 1058797] GlusterFS replicate translator does not respect the setuid permission flag during volume heal <https://bugzilla.redhat.com/show_bug.cgi?id=1058797>
14:54 dbruhn aixsyd, did you check out alternate firmware for your cards, or try to find if anyone had experience with your card/switch combo
14:55 aixsyd yep, jclift was helping me. I flashed it to the newest firmware and software
14:55 aixsyd no change.
14:55 mik3 left #gluster
14:55 mik3 joined #gluster
14:58 khushildep_ joined #gluster
14:59 Peanut Hi, how can I see the magic number of a gluster file system?
15:00 Technicool joined #gluster
15:08 kdhananjay joined #gluster
15:10 JoeJulian Peanut: blow magic smoke at it?
15:10 JoeJulian Peanut: What are you trying to accomplish?
15:11 Peanut JoeJulian: I'm tracking the migration bug. libvirt has a facility to detect whether the filesystem that the image file is mounted on, is on a shared filesystem. It recognises GFS2, NFS, SMB but not gluster, apparently. Fixing this would make the chown to root:root on live migration go away.
15:12 Peanut https://www.redhat.com/archives/libvir-list/2013-September/msg01639.html
15:12 glusterbot Title: [libvirt] [PATCHv2] util: recognize SMB/CIFS filesystems as shared (at www.redhat.com)
15:12 JoeJulian Peanut: Ah, I thought that might be it. I looked at that too a while back. The problem is that all fuse filesystems just show up as a fuse filesystem.
15:13 Peanut JoeJulian: there's no special information in the superblock that identifies a gluster filesystem?
15:13 Peanut Or probably the first block, or whatever - libvirt has some code that attempts to detect all kinds of filesystems.
15:15 Peanut Once my colleagues go home, I'm going to shut down our guests (again ;-) and try to restart libvirt with "dynamic_ownership=0" in qemu.conf, that might also be a good workaround.
15:16 ndevos Peanut: /proc/mounts lists fuse.glusterfs on rhel6, not sure if that helps...
15:17 Peanut Same on our Ubuntu box, ndevos. So that information ought to be available somewhere.. but /proc/mounts doesn't get that from any magic number, I guess.
15:17 JoeJulian Peanut: This is how it works. It uses the statfs call to get the f_type. http://paste.fedoraproject.org/72397/13909221
15:18 glusterbot Title: #72397 Fedora Project Pastebin (at paste.fedoraproject.org)
15:19 JoeJulian That's how libvirt does it. To do it using /proc/mounts seems like it would be better, to me, but that would require someone going through the patch process with libvirt.
15:19 mattappe_ joined #gluster
15:19 ndevos Peanut: 'stat -f' shows type fuseblk, I guess libvirt uses the syscall statfs()?
15:19 * JoeJulian just said that... ;)
15:20 ndevos ah, right :)
15:21 Peanut Would that be sb.f_type?
15:21 JoeJulian yep
15:22 Peanut Ok, so that way one could detect a fuse filesystem, but not yet whether it's a gluster shared storage.
15:22 jkroon joined #gluster
15:23 ndevos Peanut: I guess you can use something like getmntent("/proc/mounts") and loop through that, but its not really nice
15:24 JoeJulian Right, nor any other fuse type. All fuse filesystems are 0x65735546
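A minimal illustration of that point (not libvirt's actual code, and the mount path is a placeholder): statfs() reports the same generic FUSE magic for every FUSE-backed mount, so f_type alone cannot say "this is gluster".

    #include <stdio.h>
    #include <sys/vfs.h>

    #define FUSE_SUPER_MAGIC 0x65735546   /* same value as fs/fuse/inode.c */

    int main (int argc, char **argv)
    {
            const char *path = argc > 1 ? argv[1] : "/mnt/gluster";  /* placeholder */
            struct statfs sb;

            if (statfs (path, &sb) != 0) {
                    perror ("statfs");
                    return 1;
            }
            if (sb.f_type == FUSE_SUPER_MAGIC)
                    printf ("%s is some FUSE filesystem (gluster, sshfs, ...)\n", path);
            else
                    printf ("%s f_type = 0x%lx\n", path, (unsigned long) sb.f_type);
            return 0;
    }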
15:25 Peanut ndevos: not sure whether all distros even have /proc/mounts, but your suggestion at least is less yucky than parsing the output of 'mount' ;-)
15:25 NeatBasis joined #gluster
15:26 ndevos Peanut: /proc/mounts comes with a /proc filesystem, I think all Linux distros require that
15:26 JoeJulian libvirt: A toolkit to interact with the virtualization capabilities of recent versions of Linux (and other OSes)...
15:27 JoeJulian The "other OSes" may be a problem for adoption of something proc specific.
15:27 ndevos well, just dont open /proc/mount if it does not exist?
15:27 d-fence joined #gluster
15:27 Peanut JoeJulian: thanks, that's just what I was typing.
15:28 JoeJulian You could go a step further and just do the /proc/mount if it's the FUSE_SUPER_MAGIC since non-linux OSes don't have FUSE.
15:28 JoeJulian * standard disclaimers apply
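And a sketch of the /proc/mounts idea being discussed (Linux-only, hence the disclaimers; the mount point is a placeholder): getmntent() exposes the "fuse.glusterfs" type that plain statfs() hides.

    #include <stdio.h>
    #include <string.h>
    #include <mntent.h>

    /* Return 1 if 'dir' is listed in /proc/mounts as fuse.glusterfs. */
    static int is_glusterfs_mount (const char *dir)
    {
            FILE *fp = setmntent ("/proc/mounts", "r");
            struct mntent *m;
            int found = 0;

            if (!fp)
                    return 0;
            while ((m = getmntent (fp)) != NULL) {
                    if (strcmp (m->mnt_dir, dir) == 0 &&
                        strcmp (m->mnt_type, "fuse.glusterfs") == 0) {
                            found = 1;
                            break;
                    }
            }
            endmntent (fp);
            return found;
    }

    int main (void)
    {
            printf ("%d\n", is_glusterfs_mount ("/mnt/gluster"));  /* placeholder */
            return 0;
    }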
15:29 ndk joined #gluster
15:30 ndevos several *bsd flavours have support for fuse
15:30 glusterbot New news from resolvedglusterbugs: [Bug 764579] volume creation fails if brick path is long <https://bugzilla.redhat.com/show_bug.cgi?id=764579>
15:30 Peanut ndevos: and does Gluster support them?
15:30 ndevos Peanut: yeah, NetBSD is supposed to work
15:31 bugs_ joined #gluster
15:31 JoeJulian NetBSD procfs filesystem can emulate a /proc filesystem
15:34 blook joined #gluster
15:35 JoeJulian interesting... mount gets the filesystem type from /proc/self/mountinfo
15:37 vpshastry joined #gluster
15:38 Peanut That's cheating ;-)
15:39 kkeithley_ fwiw, fuse's statfs (struct fuse_statfs_out, struct_kstatfs) does not have an fsid member. Down at the bottom of the xlator stack in xlators/posix where the statfs terminates, statvfs(2) is called, but the f_fsid from the statvfs() is discarded. And even if it were not discarded, well, what then if all the bricks a volume were comprised of had different underlying file systems.
15:40 Peanut kkeithley_: Why would the underlying filesystem of the brick matter?
15:40 kkeithley_ that's kinda my point
15:41 Peanut I'm looking for a simple way to have libvirt detect that the filesystem for a guest image is a shared filesystem, so it doesn't do its little chown dance with all the race conditions that cause live migrations to fail.
15:41 kkeithley_ and the other half of my point is that fuse doesn't have an fsid anyway, so gluster couldn't return a magic number via a statfs call
15:41 kdhananjay joined #gluster
15:44 JoeJulian Peanut: Does that happen if you use gfapi?
15:45 JoeJulian http://libvirt.org/storage.html#StorageBackendGluster
15:45 glusterbot Title: libvirt: Storage Management (at libvirt.org)
15:47 Peanut kkeithley: statfs on a /gluster filesystem returns f_type: FUSE (65735546), so it does seem to have an fsid?
15:47 JoeJulian My thought is that if that migration chown problem doesn't happen using the preferred api, then the problem is moot.
15:48 Peanut I'm not using libgfapi, no.
15:48 JoeJulian Can you test to see if the migration problem happens if you do?
15:49 Peanut JoeJulian: once I figure out how to set that up, sure.
15:49 JoeJulian :)
15:49 Peanut It was on my to-do list to test anyway, as it seems to promise better IO performance?
15:49 jag3773 joined #gluster
15:49 JoeJulian btw, you can use qemu-img to migrate your images.
15:50 T0aD lies
15:50 T0aD :P
15:50 JoeJulian yes, much better i/o
15:50 JoeJulian ... that's why it's preferred. :D
15:50 _pol joined #gluster
15:51 eshy joined #gluster
15:51 * JoeJulian pokes a T0aD with his ban stick... ;)
15:51 Peanut I'm currently not using storage pools etc, so it's going to take me some time to build this.
15:51 JoeJulian pfft.... 15 minutes... I'm timing you...
15:52 tdasilva joined #gluster
15:54 saurabh joined #gluster
15:59 bala joined #gluster
16:00 purpleidea JoeJulian: you use qemu-img to migrate somehow instead of the virsh command ?
16:01 Peanut Ok, I just tested the 'dynamic_ownership=0', and I can do live migrations again!
16:01 JoeJulian purpleidea: to migrate the image
16:02 purpleidea JoeJulian: ?
16:02 Peanut JoeJulian: migrate from where to where?
16:03 JoeJulian meh, probably doesn't matter. I'm half asleep still.
16:04 * purpleidea is also 1/2 asleep
16:06 kkeithley_ Again, fuse's statfs (struct fuse_statfs_out, struct_kstatfs) does not have an fsid member. Down through fuse, up through gluster's fuse-bridge, down the xlator stack, across the wire to the glusterfsd server, down its xlator stack to where statvfs is called, and back, there is no f_fsid or f_type.
16:06 JoeJulian Though if you're using qcow2 images it probably does... Wouldn't you need to rebase and change the backing file to one referenced by a glusterfs url? qemu-img rebase -b glusterfs://vmvol/baseimg.qcow2 glusterfs://vmvol/instance.qcow2
16:07 kkeithley_ If you look at the kernel source (fs/fuse/inode.c, line 395 or so of in convert_fuse_statfs) it has stbuf->f_type    = FUSE_SUPER_MAGIC;
16:07 JoeJulian Not sure though since I don't usually use cow
16:07 kkeithley_ #define FUSE_SUPER_MAGIC 0x65735546
16:08 JoeJulian Yeah, that's why I gave up on that on Sept 27, 2012.
16:08 kkeithley_ bottom line being that gluster doesn't have a way to report that's it's gluster, not through statfs/statvfs.
16:09 JoeJulian Once it gets to asking for patches in the kernel, I usually give up.
16:09 kkeithley_ You could, instead, try a getxattr of a known xattr
16:11 kkeithley_ what happened on Sept 27, 2012?
16:12 JoeJulian That's when I last looked at that issue.
16:12 kkeithley_ lol, and you remembered the exact date?
16:12 JoeJulian I just happened to have a file laying around that had the timestamp.
16:13 JoeJulian damn.... I should have said, "Yes, and I had pot roast for dinner that night..."
16:13 purpleidea ...and it was raining
16:13 Peanut Thanks, kkeithley_ - I agree with your 'bottom line' completely.
16:16 kkeithley_ thus my suggestion, if statfs tells you it's fuse, then try a getxattr(2) on the volume top-level-directory, e.g. trusted.gluster.gfid, and if that's successful then you know for certain it's gluster
16:17 Peanut kkeithley_: That's great -I hope libvirt would accept such a patch.
16:19 NeatBasis joined #gluster
16:21 kkeithley_ cc me on the BZ when you do. I can't guarantee anything, but it would give me an idea of who to talk to if they're unreceptive initially
16:21 Peanut JoeJulian, kkeithley_ : I'm making a quick writeup for the mailing list. Can I quote you two by nickname or do you prefer something else?
16:21 kkeithley_ sure
16:22 T0aD they prefer to be refered as laurel and hardy
16:22 Peanut ribbit
16:22 JoeJulian probably trusted.glusterfs.volume-id since gfid isn't available through the client.
16:24 dbruhn left #gluster
16:24 JoeJulian My nick is my name, so I suppose that'll work.
16:24 hagarth Peanut: are you sending it to libvirt-users?
16:25 dbruhn joined #gluster
16:25 Peanut hagarth: no, to gluster-users, just to inform people there who have had this issue.
16:26 hagarth Peanut: ah ok
16:27 Peanut I'll update the bugs, too.
16:27 Peanut But a patch is a bit more difficult, as I would need to start out with the current version of libvirt, not the one packaged by a distro.
16:28 JoeJulian looking at the source, that part hasn't changed in a long while.
16:28 benjamin__ joined #gluster
16:29 samppah Peanut: so is it possible that this would be avoided if libvirt storage pools etc are in use?
16:33 vpshastry left #gluster
16:34 Peanut samppah: yes, that's what JoeJulian suggested already.
16:34 samppah ah, sorry :)
16:35 Peanut JoeJulian: That part in libvirt changed in September 2013 to include SMB as a 'shared' filesystem.
16:35 kkeithley_ hmm, I can't seem to getfattr -n trusted.glusterfs.volume-id from the fuse mount. (I can get it on the brick)
16:36 JoeJulian kkeithley_: As root, right?
16:36 kkeithley_ yes
16:37 JoeJulian works for me with 3.4.0
16:37 kkeithley_ yeah, 3.4.2
16:37 kkeithley_ nope, with $HEAD
16:38 Ramereth JoeJulian: yes?
16:39 JoeJulian I was just saying that I should try to repeat my database performance testing down there sometime...
16:40 JoeJulian but I kind-of dropped the ball on getting set up.
16:41 Ramereth ah ok. Was hoping we didn't drop the ball. We've been pretty busy
16:41 JoeJulian me too
16:43 ells joined #gluster
16:44 kkeithley_ okay, it works with 3.4.2 on both the fuse mount and the backing volume. We must have changed something in 3.5?
16:44 kkeithley_ hagarth: ^^^
16:44 semiosis EdWyse: does this still trigger the bot?
16:44 glusterbot semiosis: EdWyse_* is now the more aptly named JoeJulian.
16:44 semiosis :O
16:44 JoeJulian hehe
16:46 Axxe joined #gluster
16:47 Axxe Hi all, I have a question: I want to use Gluster for iSCSI with VMware, for vMotion and vStorage .. is it possible, if I start Gluster first and all the VMs later?
16:47 JoeJulian kkeithley_: 21f7ad207bdb8ddf549aa65cafc1ad95e261ec3d
16:48 JoeJulian That's not what I wanted to paste... bug 1034716
16:48 kkeithley_ grrr
16:48 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1034716 urgent, unspecified, ---, vmallika, MODIFIED , Able to set and get "trusted.glusterfs.volume-id" extended attribute from mount point
16:49 JoeJulian probably good that it shouldn't be set from the client. :D
16:49 kkeithley_ sure, not set. What's wrong with reading?
16:49 JoeJulian yeah
16:51 kkeithley_ oh well, can still get trusted.glusterfs.pathinfo
16:51 JoeJulian I was thinking that too.
16:52 kkeithley_ I liked volume-id better though. Although for no particular reason
16:52 JoeJulian I think a volume-name would be nice.
16:53 JoeJulian I think I'll file a bug / feature request
16:53 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:53 ndevos I dont like a solution where the root of the volume needs to be mounted, what if we support mounting a subdir one day?
16:55 JoeJulian hence my solution of providing trusted.glusterfs.volume-name... might as well do that at the locks translator like pathinfo is.
16:55 kkeithley_ I could argue that getxattr of either of those xattrs should work regardless of whether it was mounted at root or a subdir.
16:57 JoeJulian That, and I don't see gfid being exposed as a problem either, since it's also the inode number.
16:58 kkeithley_ and it's a small matter of code to walk down (or up) the tree to find it.
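A sketch of the getxattr probe kkeithley_ suggests, using trusted.glusterfs.pathinfo as the attribute since volume-id apparently stopped being readable from the mount on newer builds; the mount path is a placeholder and the exact xattr to query is still being debated here.

    #include <stdio.h>
    #include <errno.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    /* If statfs() already said FUSE, a readable gluster-specific virtual
     * xattr is a strong hint the mount really is glusterfs. */
    static int looks_like_gluster (const char *path)
    {
            char buf[4096];
            ssize_t n = getxattr (path, "trusted.glusterfs.pathinfo",
                                  buf, sizeof (buf));
            /* ERANGE means the xattr exists but our buffer is too small,
             * which is still a positive answer */
            return n >= 0 || errno == ERANGE;
    }

    int main (void)
    {
            printf ("%d\n", looks_like_gluster ("/mnt/gluster"));  /* placeholder */
            return 0;
    }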
16:58 rwheeler joined #gluster
16:58 JoeJulian Hrm... pathinfo wouldn't work if there's no dht(or stripe) translator.
16:59 kkeithley_ that seems like bug
17:00 theron joined #gluster
17:03 ndevos JoeJulian, kkeithley_: how about "getfattr -e text -n glusterfs.gfid2path /mnt/file-or-dir" ?
17:04 zerick joined #gluster
17:04 ndevos xlators/storage/posix/src/posix.c has quite some interesting magic it seems
17:05 kaptk2 joined #gluster
17:06 kkeithley_ ndevos: yes to the magic, and not sure about gfid2path /mnt/file-or-dir
17:06 * ndevos fails to see what gfid2path tries to do
17:07 kkeithley_ JoeJulian: Are you thinking of some situation where there isn't always a dht xlator? Even a  ... replica X $brick1 ... $brickX gives me a dht xlator
17:09 JoeJulian I wasn't sure. I hadn't gone as far as creating a volume to check.
17:10 JoeJulian Don't forget. I come from the 2.0 days when there was no pre-defined expectations in the graph.
17:10 bala joined #gluster
17:10 Peanut On my bricks, I have a hidden .glusterfs, and I just noticed that this directory uses about 25GB of storage, almost as much as the actual data-files. That's bad, isn't it?
17:10 JoeJulian Peanut: heh, nope
17:10 Peanut JoeJulian: well, it's bad because it uses up a lot of diskspace?
17:10 JoeJulian @lucky what is this .glusterfs tree
17:10 kkeithley_ dht seems redundant in a pure replica or stripe volume. Maybe we keep it as a conceptual place holder or something to make it easier to expand later on
17:10 glusterbot JoeJulian: http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
17:11 blook joined #gluster
17:12 kkeithley_ fun with links and symlinks
17:13 JoeJulian I think the fuse translator should just respond to trusted.glusterfs=1
17:13 kkeithley_ that would work
17:14 Peanut Oh, wait.. it's all linked. So it doesn't actually take up space, it just looks that way, depending on how you run du.
17:15 * kkeithley_ needs caffeine. Starbucks run
17:17 plarsen joined #gluster
17:34 semiosis my strategy of just waiting for problems to go away on their own worked yet again!
17:34 Mo_ joined #gluster
17:34 semiosis 3.5.0beta2 builds on ubuntu precise \o/
17:34 semiosis (unlike beta1)
17:36 Peanut semiosis: nice :-)
17:39 theron joined #gluster
17:39 mattappe_ joined #gluster
17:41 jobewan joined #gluster
17:43 theron_ joined #gluster
17:48 mattap___ joined #gluster
17:51 social kkeithley_: ping ? another annoying question this time about libglusterfs/src/inode.c and inode->_ctx
17:54 social kkeithley_: __inode_create allocates inode->_ctx and let's say fuse_lookup_cbk
17:55 social does inode_unref(state->loc.inode) and then state->loc.inode = inode_new (itable);
17:56 social I guess it wanted to drop the whole inode and create new one which means inode->_ctx gets leaked, doesn't it? I guess __inode_retire should clean that up?
17:57 mattappe_ joined #gluster
18:03 l0uis hi folks. what do i need to do to enable uid/gid translation for a specific client?
18:03 mattappe_ joined #gluster
18:04 B21956 joined #gluster
18:08 lpabon_ joined #gluster
18:08 japuzzo_ joined #gluster
18:11 B21956 joined #gluster
18:12 JoeJulian l0uis: You get the server.root-squash setting. That's it.
18:13 l0uis so the filter translate-uid translate-gid no longer works?
18:14 rotbeard joined #gluster
18:17 vpshastry1 joined #gluster
18:20 SteveCooling Guys.. I have a volume that lists a number of files that need healing, but starting a heal operation does not resolve it. What do I do?
18:20 SteveCooling there are no files in the split-brain list
18:22 SteveCooling stat'ing some of the files produces input/output error, which is resolved by unlinking the file on one of the mirrored bricks
18:22 SteveCooling but that doesn't remove it from the heal info list
18:25 vpshastry1 left #gluster
18:29 SteveCooling (stat from client)
18:37 JoeJulian I hate when you tell CLink repair what the problem is and they check something completely different and tell you there's no problem... grrr....
18:40 SteveCooling related to me in any way?
18:40 JoeJulian l0uis: Those options were for the filter translator. That translator isn't in the graph through any cli configurations.
18:41 JoeJulian SteveCooling: no... just a general rant... :D
18:41 SteveCooling :)
18:41 JoeJulian Then the @%$!% idiot tries talking down to me.... I'm in no mood for these morons today.
18:43 johnmark lol... ja, I *hate* that
18:44 l0uis JoeJulian: k thanks.
18:45 RedShift joined #gluster
18:46 JoeJulian l0uis: also... the filter translator hasn't had a patch (other than a license change) since May of 2010
18:46 JoeJulian I'd say it's kind-of dead.
18:47 l0uis indeed.
18:47 andreask joined #gluster
18:48 ricky-ti1 joined #gluster
18:52 sprachgenerator joined #gluster
18:54 _Bryan_ joined #gluster
18:54 _NiC joined #gluster
19:01 johnbot11 joined #gluster
19:16 mattappe_ joined #gluster
19:16 mattapperson joined #gluster
19:17 KyleG joined #gluster
19:17 KyleG joined #gluster
19:19 robinr joined #gluster
19:22 mick27 joined #gluster
19:26 frankbutt joined #gluster
19:26 frankbutt left #gluster
19:30 jobewan joined #gluster
19:58 jobewan joined #gluster
20:08 semiosis johnmark: ping
20:09 robinr hi, i've got gluster-3.3.0, the /var/log/glusterfs/glustershd.log is filling up with: [2014-01-28 15:07:42.267871] W [client3_1-fops.c:592:client3_1_unlink_cbk] 0-RedhawkShared-client-0: remote operation failed: No such file or directory
20:09 robinr [2014-01-28 15:07:42.267946] E [afr-self-heald.c:287:_remove_stale_index] 0-RedhawkShared-replicate-0: 44be481f-94b6-441f-b197-989fb893a9b5: Failed to remove index on RedhawkShared-client-0 - No such file or directory.
20:09 robinr attached is our https://dpaste.de/duQ8 configurations
20:09 glusterbot Title: dpaste.de: Snippet #255364 (at dpaste.de)
20:10 robinr it's essentially 2 server replicate. I wonder what's causing the self-heald to go mad. the screen is scrolling extremely fast as i do the tail -f; it's about "removing stale index for UUID" and "Remove operation failed: No such file or directory"
20:10 dbruhn robinr, looks like you are missing files off of one of the replicant pairs
20:11 semiosis robinr: any time you see 'remote operation failed' in a client (or shd, or nfs) log file, you should look in the brick log files for a corresponding entry, that would be the other end of the remote operation
20:11 dbruhn are you sure the bricks are healthy? or did something happen that would have removed a bunch of data off of one?
20:11 semiosis dbruhn++
20:11 robinr let me check the brick logs
20:12 robinr brick logs are running like mad:
20:12 robinr https://dpaste.de/Cd3m
20:12 glusterbot Title: dpaste.de: Snippet #255365 (at dpaste.de)
20:12 robinr cannot add a new contribution node
20:12 semiosis @learn remote operation failed as any time you see 'remote operation failed' in a client (or shd, or nfs) log file, you should look in the brick log files for a corresponding entry, that would be the other end of the remote operation
20:12 glusterbot semiosis: The operation succeeded.
20:13 semiosis robinr: possible your bricks are unmounted & glusterfs is using a dir on the rootfs?
20:13 criticalhammer joined #gluster
20:14 robinr semiosis: they are currently mounted.
20:14 robinr should i restart "glusterd" to see what happened? The partitions should be mounted all along
20:14 criticalhammer Hi, has anyone here had any good experience with the integrated PERC6 controllers found in Dell servers?
20:15 dbruhn robinr: either the links to the files are missing from the .glusterfs directory, or the links themselves are not linking to files.
20:15 dbruhn Is one brick making way more noise than the other?
20:15 robinr dbruhn; yes
20:16 robinr i checked the .glusterfs and it seems to be fine
20:16 robinr they are df -h
20:16 dbruhn robinr: something is wrong with that brick, maybe see if you can figure out when the logs went crazy and see if there is something corresponding to when the issue started
20:17 robinr ok..
20:17 robinr thanks
20:17 robinr let me look around
20:17 dbruhn you might also want to make sure the filesystem under that brick is good
20:17 KyleG left #gluster
20:18 robinr thanks dbruhn and semiosis
20:18 semiosis yw
20:18 robinr got to run to a meeting
20:19 lpabon joined #gluster
20:19 johnmark semiosis: pong
20:19 semiosis see pm
20:20 johnmark ?
20:20 rwheeler joined #gluster
20:20 * johnmark looks, finds no pm
20:21 dbruhn criticalhammer, I have PERC H710's in 24 servers and minus one server they have been solid. I know it's not what you asked specifically.
20:22 criticalhammer dbruhn: thats ok
20:23 criticalhammer im trying to have a solid understanding of throughput numbers between the PERC cards and LSI Logic MegaRAID 9260-8i card
20:24 criticalhammer but it seems like google does not provide good reports of PERC info
20:25 dbruhn That's too bad, I have an oddball stack of 8 servers over here with those 9260's in them, but no PERC 6 to test against
20:26 criticalhammer how do you like the 9260s?
20:26 criticalhammer ive seen good numbers with those guys
20:28 dbruhn Honestly the servers they are in have 4x 2TB SATA disks in them, so I've never paid much attention to their performance.
20:28 dbruhn They were solid and reliable when I had them in production.
20:28 criticalhammer hmmm
20:29 criticalhammer i should go to some vendors and rent out RAID cards to test
20:29 RedShift aren't Dell PERC's LSI cards?
20:29 RedShift so performance should be similar or even the same?
20:29 RedShift I myself use HP gear
20:30 RedShift no problems there
20:30 dbruhn I have some HP stuff too, both have been about the same for stability.
20:31 criticalhammer I wish I could rent some servers and just test out setups
20:31 dbruhn I've only had a raid card go out in each, granted I have far less time on the HP stuff than the dell stuff.
20:31 criticalhammer xD
20:32 dbruhn criticalhammer, do you have a local shop that specializes in small run server builds? Often they will help you put stuff together.
20:32 criticalhammer nope
20:32 RedShift criticalhammer what specs are you seeking?
20:33 criticalhammer 4 server 96TB setup that can push up to 122K/sec
20:34 criticalhammer my plan is to use 10GbE
20:34 RedShift that's gonna require memory
20:34 RedShift I mean money
20:34 criticalhammer yeah
20:34 RedShift well both actually
20:34 dbruhn how many spindles are you talking? and what kind of spindles?
20:35 criticalhammer each box has 8 4tb 7200 rpm disks in a raid6
20:35 criticalhammer spindle count, idk
20:35 criticalhammer 1 sec
20:35 dbruhn not important
20:35 dbruhn I was asking about disks
20:37 criticalhammer these disks would be coming from dell
20:37 criticalhammer 8 4tb sas disks
20:38 dbruhn 122K a sec per client? or total?
20:38 criticalhammer 4TB 7.2K RPM Near-Line SAS 6Gbps 3.5in Hot-plug Hard Drive (342-5299)
20:38 criticalhammer 1 sec let me upload a bonnie++ test I did on my current setup
20:38 criticalhammer id like to have similar speeds
20:38 criticalhammer wheres a good place to dump text?
20:39 dbruhn fpaste.org
20:40 criticalhammer http://fpaste.org/72525/09416041/
20:40 glusterbot Title: #72525 Fedora Project Pastebin (at fpaste.org)
20:40 criticalhammer so thats from a production desktop to the local SAN via NFS
20:40 criticalhammer sorry not local but remote
20:40 dbruhn NAS
20:40 dbruhn if you are using NFS
20:41 criticalhammer ah, true
20:41 criticalhammer its a 8GB fiber setup
20:41 dbruhn and you want ~ 120MB/s?
20:41 dbruhn from this test
20:41 criticalhammer around there
20:41 criticalhammer i understand that its going to be slower
20:41 criticalhammer with overhead
20:42 mattappe_ joined #gluster
20:42 criticalhammer but thats a mark id like to strive for
20:42 dbruhn kk I was going to say that your earlier statement was 122K/s
20:42 criticalhammer yeah i misread the numbers
20:42 dbruhn no worries, just making sure I read it right
20:43 dbruhn 8x 4tb disks per server?
20:43 criticalhammer yeah
20:43 criticalhammer mainly because of power issues
20:43 criticalhammer more servers = more power
20:43 criticalhammer and also we dont physically have a lot of room
20:43 dbruhn what does your current setup look like? disk counts, etc
20:44 criticalhammer controller with 3 16TB JBODs chained over, I believe it's iSCSI
20:45 criticalhammer its not ethernet cables but the plug ends look very similar to infiniband
20:45 dbruhn cx4 connectors
20:45 criticalhammer probably
20:45 dbruhn how many disks?
20:46 criticalhammer 8 per jbod
20:46 dbruhn Just from what I am looking at here, general numbers
20:46 dbruhn what I try and do with gluster is build my block level storage to deliver the performance I need.
20:47 dbruhn 122MB/s isn't a hard number to hit
20:47 dbruhn but you are going to have overhead at each layer
20:47 criticalhammer yeah
20:47 criticalhammer which i understand
20:47 criticalhammer unfortunately my gluster setup is just 4 desktops ive used to learn gluster administration.
20:48 criticalhammer my test gluster setup that is
20:48 dbruhn so 8x 7200RPM drives, advertised is like 1200MB/s (8x150MB/s), I always assume a 7200 RPM disk is closer to 50MB/s
20:48 mattappe_ joined #gluster
20:48 dbruhn which means if you have each server as a brick you should have about 400MB/s
20:48 criticalhammer minus ethernet overhead
20:48 dbruhn obviously there is latency and overhead issues to be considered.
20:49 dbruhn the LSI 9620 looks like it can do 2.8G/s reads, and 1.3G/s writes
20:49 dbruhn Not sure on the perc
20:50 criticalhammer im looking that up now
20:50 criticalhammer or at least trying to
20:51 dbruhn 600 read, 400 write
20:51 dbruhn with 8x 1.5tb drives
20:51 dbruhn setup with raid 6
20:51 criticalhammer where did you find that number?
20:51 criticalhammer just curious
20:51 dbruhn http://www.storageforum.net/forum/showthread.php/7815-8-x-1-5TB-Perc-6i-RAID-6-Faster-Storage
20:51 criticalhammer oh nice
20:52 dbruhn It looks like you are on the right track to hit your goals
20:52 dbruhn You should put together a blog entry on the performance after you get things setup.
20:52 criticalhammer I'd be happy to
20:53 dbruhn It's a question that gets asked a lot, and no one ever has a good answer
20:53 criticalhammer im starting to lean towards the Dell setup and the PERC 710h
20:53 dbruhn Be interesting to see a benchmark on a drive, a drive + raid, a full raid, and then gluster after, to show the differences of what to expect
20:54 dbruhn I am running a bunch of dell 720xd's with the h710 cards and it works well
20:54 purpleidea dbruhn: and mdadm (software raid)
20:54 criticalhammer thats a lot of testing and documentation xD
20:54 sroy_ joined #gluster
20:54 dbruhn It's a community, every contribution counts ;)
20:54 criticalhammer mdadm works well if the CPU isnt being taxed
20:54 criticalhammer i run a computational cluster that uses mdadm
20:55 mattappe_ joined #gluster
20:55 criticalhammer and the headnode acts as an NFS, and the wait time skyrockets when it gets hit with 1GB of data
20:55 mattapp__ joined #gluster
20:56 criticalhammer from what ive heard gluster uses some cpu power
20:56 criticalhammer and it grows as more bricks are added
20:56 dbruhn hmm
20:57 criticalhammer id just rather not use mdadm
20:57 dbruhn I wish I had the same loads going on between my systems, I have a couple 6 server systems, and a 12 server system
20:57 purpleidea it's not that i'd like to use mdadm, specifically, but managing hardware raid is just a disaster... how do you really integrate all the different proprietary systems with your existing management?
20:58 purpleidea example, i wrote: https://github.com/purpleidea/puppet-lsi to try and help with some of this, but it isn't as much as it should be
20:58 glusterbot Title: purpleidea/puppet-lsi · GitHub (at github.com)
20:58 criticalhammer oh, good point purpleidea
20:59 dbruhn As sad as it is, I just have the dell management utilities installed and it emails me when a disk goes bad. Not the answer to all of the issues, but totally understand.
21:00 purpleidea dbruhn: right, if i gave you X hosts, how would you deploy gluster on all of them?
21:00 criticalhammer i use python scripts that ping the linux management tools provided and reports it to icinga
21:00 criticalhammer works good so far
21:00 criticalhammer granted it relies on the management tools actually working
21:01 purpleidea criticalhammer: can you elaborate on which scripts and which management tools provided?
21:01 criticalhammer sure give me a sec
21:01 dbruhn purpleidea, totally get what you mean. I honestly just use some scripts I wrote today over ssh, which is the wrong answer, but I haven't had the time to improve on it for our stuff
21:01 criticalhammer let me look over my systems and see what I have
21:01 mattappe_ joined #gluster
21:02 dbruhn didn't write today, but that's what I have today
21:03 criticalhammer so i have servers using mdadm, megaRAID admin utilities, and Adaptec utilities
21:03 criticalhammer now understand these systems were here before I was hired
21:04 criticalhammer i found the right admin utilities for each server running hardware raid. Fortunately these utilities have a command line that you can poke at with scripts.
21:04 criticalhammer so i just wrote simple scripts that execute commands and spits out statuses of the raid health
21:04 criticalhammer then pipe them into icinga
21:05 criticalhammer simple bash scripts could do this
21:05 criticalhammer i just chose python because thats what I do all my work in
21:06 criticalhammer also mdadm is really easy because all you have to do is setup email forwarding on your systems, and mdadm will automatically send you emails when stuff breaks.
21:06 purpleidea indeed
21:06 mattappe_ joined #gluster
21:07 purpleidea i'd love to see a world where hardware raid wasn't proprietary (same with uefi firmware) and i think the future of open storage like gluster needs that hardware so we can build integrated tools that don't suck
21:08 criticalhammer well opensource hardware is hard to setup
21:08 criticalhammer it would be great but its just a lot of wor
21:08 criticalhammer work
21:09 purpleidea i don't think as a niche market it would work, but if there was some sort of large open source company writing code and helping that sort of avenue... hmmm
21:09 dbruhn I wonder if it would be advantageous to submit a hardware raid request to the open compute project
21:09 purpleidea imagine: you pay $x/server currently for "enterprise os", and you pay
21:09 criticalhammer well if it does happen, id read up on the "asterisk" company
21:09 purpleidea $5 more for "open firmware"
21:09 criticalhammer they do a good job providing opensource hardware
21:10 criticalhammer at least I think so, and i think its a good model to copy
21:10 theron joined #gluster
21:11 criticalhammer well thanks dbruhn for the input
21:11 criticalhammer if this gets off the ground, and if I have time, id be happy to do a write up
21:11 dbruhn no problem, hope it works out
21:11 dbruhn I wish I would have had time to do a write up on my systems when I put them up, I was under such a time crunch
21:11 criticalhammer yeah and I may be as well
21:12 criticalhammer Ill be doing benchmarks
21:12 criticalhammer which I can track and document
21:12 criticalhammer but all the fun extras may not happen
21:13 dbruhn Like I said earlier every contribution helps
21:15 sroy_ joined #gluster
21:20 mattappe_ joined #gluster
21:21 sulky joined #gluster
21:25 _pol joined #gluster
21:27 mattappe_ joined #gluster
21:34 sulky joined #gluster
21:34 _pol joined #gluster
21:35 ells joined #gluster
21:41 JoeJulian purpleidea http://www.coreboot.org/TianoCore
21:41 glusterbot Title: TianoCore - coreboot (at www.coreboot.org)
21:43 _pol joined #gluster
21:47 purpleidea JoeJulian: cool... have you tried this?
21:50 ells_ joined #gluster
21:50 sroy joined #gluster
21:52 mattapp__ joined #gluster
21:52 _pol joined #gluster
21:53 JoeJulian Not that specifically, but I do have a couple servers that run coreboot
21:59 zapotah joined #gluster
22:00 purpleidea interesting! i'd love for you to writeup a post about how it's working and your impressions
22:01 a2 joined #gluster
22:02 sulky joined #gluster
22:03 semiosis submitted a proposal for a devnation talk :O
22:25 robinr joined #gluster
22:39 JoeJulian Every time I see that out of the corner of my eye, I read it as a "divination" talk. Must be for PotterCon...
23:00 flakrat_ joined #gluster
23:01 flakrat_ left #gluster
23:03 Dga joined #gluster
23:09 mattappe_ joined #gluster
23:12 mattap___ joined #gluster
23:14 mattappe_ joined #gluster
23:24 mattappe_ joined #gluster
23:24 criticalhammer left #gluster
23:27 mattapperson joined #gluster
23:34 gdubreui joined #gluster
23:37 mattappe_ joined #gluster
23:38 mattapperson joined #gluster
23:40 gmtech_ joined #gluster
23:45 overclk joined #gluster
23:45 theron joined #gluster
