
IRC log for #gluster, 2017-02-06


All times shown according to UTC.

Time Nick Message
00:02 farhorizon joined #gluster
00:12 ndevos joined #gluster
00:12 ndevos joined #gluster
00:38 alvinstarr joined #gluster
00:40 Teraii JoeJulian, i think i have found why the corrupted data occurs
00:41 * Teraii need some other test to verify
00:44 Teraii for some reason a simple ls on the dir interrupts the sync
00:44 Teraii (ls on the slave)
00:44 * Teraii sad :(
00:51 Gambit15 joined #gluster
00:51 Teraii and after some minutes the file is back in sync
00:52 Teraii (i.e. the node where the file was uploaded truncates the file to the same size as the slave)
00:58 shdeng joined #gluster
01:07 farhorizon joined #gluster
01:16 BitByteNybble110 joined #gluster
01:24 BitByteNybble110 Have a disperse volume 1 x (2 + 1) = 3.  Volume heal info shows that the disperse is unhealthy after we had the NFS Ganesha service fail on one node.  Restarted the failed node, but it appears as though no files are healing.  Checked the ganesha-gfapi.log on all three servers and posted the tail here:  https://paste.fedoraproject.org/549353/14863440/
01:24 glusterbot Title: #549353 • Fedora Project Pastebin (at paste.fedoraproject.org)
01:24 BitByteNybble110 Gluster peer status shows all nodes connected.  One node reports that the transport is not connected, another reports that the device or resource is busy.
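A minimal sketch of the commands one might use to chase a stuck heal like this, assuming a hypothetical volume name gv0:

    gluster volume status gv0                        # confirm bricks and self-heal daemons are up
    gluster volume heal gv0 info                     # list entries still pending heal
    gluster volume heal gv0 statistics heal-count    # pending-heal count per brick
    gluster volume heal gv0 full                     # kick off a full heal if nothing is progressing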
01:54 arpu joined #gluster
02:00 jdossey joined #gluster
02:01 Gambit15 joined #gluster
02:09 rastar_away joined #gluster
02:26 derjohn_mob joined #gluster
03:11 alvinstarr joined #gluster
03:19 nbalacha joined #gluster
03:26 kramdoss_ joined #gluster
03:26 gem joined #gluster
03:30 suliba joined #gluster
03:34 rjoseph joined #gluster
03:41 magrawal joined #gluster
03:43 farhorizon joined #gluster
03:57 atinm joined #gluster
04:08 jbrooks joined #gluster
04:16 RameshN joined #gluster
04:27 Shu6h3ndu joined #gluster
04:32 irated JoeJulian: we have 8 levels of directories before the final resting place for the files.
04:33 irated My local benchmark was only going 4 levels deep so I'm testing the code at 4 levels to see the difference.
04:34 irated IIRC, 8 levels of "check if dir exists, create dir" is a form of lookup amplification in glusterfs, right?
04:41 gyadav joined #gluster
04:43 kdhananjay joined #gluster
04:55 ppai joined #gluster
04:57 k4n0 joined #gluster
05:02 DSimko joined #gluster
05:02 DSimko Can anyone tell me why my gluster nfs.log keeps getting these messages? [2017-02-06 04:54:24.874333] E [MSGID: 114031] [client-rpc-fops.c:1549:client3_3_inodelk_cbk] 0-gv00-client-1: remote operation failed [Transport endpoint is not connected]
05:02 DSimko I would have used fpaste but it is down right now.
05:07 BitByteNybble110 joined #gluster
05:09 skumar joined #gluster
05:10 ankit_ joined #gluster
05:16 buvanesh_kumar joined #gluster
05:21 buvanesh_kumar joined #gluster
05:21 irated looks like a node is down
05:22 DSimko @Irated is that in reply to my question?
05:22 Prasad joined #gluster
05:24 rafi joined #gluster
05:30 aravindavk joined #gluster
05:43 riyas joined #gluster
05:43 sanoj joined #gluster
05:49 prasanth joined #gluster
05:56 apandey joined #gluster
05:58 susant joined #gluster
06:08 karthik_us joined #gluster
06:10 ankit_ joined #gluster
06:13 Wizek_ joined #gluster
06:14 kotreshhr joined #gluster
06:17 rastar_away joined #gluster
06:18 cvstealth joined #gluster
06:20 susant joined #gluster
06:25 hgowtham joined #gluster
06:32 sac joined #gluster
06:33 loadtheacc joined #gluster
06:34 mb_ joined #gluster
06:42 ahino joined #gluster
06:43 msvbhat joined #gluster
06:50 [diablo] joined #gluster
06:57 nthomas_ joined #gluster
06:58 skoduri joined #gluster
07:00 mhulsman joined #gluster
07:00 sbulage joined #gluster
07:05 mhulsman joined #gluster
07:11 itisravi joined #gluster
07:14 riyas joined #gluster
07:21 jtux joined #gluster
07:27 poornima joined #gluster
07:27 Philambdo joined #gluster
07:29 pkoro joined #gluster
07:37 rastar joined #gluster
07:48 jkroon joined #gluster
08:04 ivan_rossi joined #gluster
08:14 Lee1092 joined #gluster
08:16 fsimonce joined #gluster
08:19 ankit__ joined #gluster
08:28 riyas joined #gluster
08:32 rrichardsr3 joined #gluster
08:37 musa22 joined #gluster
09:01 ShwethaHP joined #gluster
09:07 msvbhat joined #gluster
09:12 derjohn_mob joined #gluster
09:13 ankit_ joined #gluster
09:19 musa22 joined #gluster
09:31 msvbhat joined #gluster
09:34 rafi joined #gluster
09:38 pulli joined #gluster
09:38 jri joined #gluster
09:43 zoyvind joined #gluster
09:47 jwd joined #gluster
09:53 k4n0 joined #gluster
09:53 ahino joined #gluster
09:57 kotreshhr joined #gluster
10:33 jkroon joined #gluster
10:50 Seth_Karlo joined #gluster
10:54 musa22 joined #gluster
10:58 rafi_mtg1 joined #gluster
11:20 pulli joined #gluster
11:22 karthik_us joined #gluster
11:26 MadPsy When using the built-in NFS server, is there a way to ensure writes are synchronous? I'm seeing operations like 'unlink' return success and then a delay before another client sees the file gone (with 'ls', for example) - i.e. would disabling write-behind or flush-behind help?
11:35 DV joined #gluster
11:36 kotreshhr joined #gluster
11:42 ahino joined #gluster
12:11 pdrakeweb joined #gluster
12:15 nbalacha joined #gluster
12:17 yosafbridge joined #gluster
12:19 k4n0 joined #gluster
12:19 apandey joined #gluster
12:25 rafi_mtg1 joined #gluster
12:27 apandey_ joined #gluster
12:49 Seth_Karlo joined #gluster
12:58 rafi_mtg1 joined #gluster
12:59 musa22 joined #gluster
12:59 kotreshhr left #gluster
13:23 Seth_Karlo joined #gluster
13:34 aravindavk joined #gluster
13:36 ira joined #gluster
13:37 musa22 joined #gluster
13:38 skylar joined #gluster
13:41 buvanesh_kumar joined #gluster
13:52 rwheeler joined #gluster
13:53 kramdoss_ joined #gluster
13:56 nbalacha joined #gluster
14:00 unclemarc joined #gluster
14:01 Plam joined #gluster
14:02 rafi_mtg1 joined #gluster
14:03 BatS9 joined #gluster
14:03 skumar joined #gluster
14:05 vbellur joined #gluster
14:14 Plam hi there. I'm from Xen Orchestra project, started to work on Gluster integration in XenServer (via XO). I met some gluster team at FOSDEM yesterday :)
14:19 atinm joined #gluster
14:19 kpease joined #gluster
14:23 shyam joined #gluster
14:32 skoduri joined #gluster
14:37 ankit_ joined #gluster
14:52 vbellur Plam: welcome!
14:52 shyam joined #gluster
14:52 Hamburglr joined #gluster
14:57 squizzi joined #gluster
14:57 Plam thanks vbellur :)
14:58 Plam I'm doing more benchmarks before asking questions, I don't want to waste your time ;) So I'll be back :p
14:59 msvbhat joined #gluster
14:59 plarsen joined #gluster
15:01 rafi_mtg1 joined #gluster
15:03 Humble joined #gluster
15:10 unclemarc joined #gluster
15:16 alvinstarr joined #gluster
15:25 derjohn_mob joined #gluster
15:26 shyam joined #gluster
15:26 Seth_Karlo joined #gluster
15:26 Gambit15 joined #gluster
15:28 farhorizon joined #gluster
15:31 rwheeler joined #gluster
15:34 Hamburglr joined #gluster
15:35 snehring joined #gluster
15:36 plarsen joined #gluster
15:39 Shu6h3ndu joined #gluster
15:54 jkroon joined #gluster
15:56 kramdoss_ joined #gluster
15:57 farhorizon joined #gluster
16:03 Plam okay so first question: in disperse mode, heal doesn't seem to repair just the affected chunks, it heals whole files
16:03 Plam is that normal behavior?
16:04 wushudoin joined #gluster
16:04 Gambit15 Easy answer, disperse mode is deprecated
16:04 Plam wat?
16:04 Gambit15 Use sharding instead
16:04 bowhunter joined #gluster
16:04 Plam you mean the whole erasure code thing is deprecated???
16:04 ankit_ joined #gluster
16:05 Gambit15 Sharding also has the segmented healing behaviour you just mentioned
16:05 Plam but for a 3-host setup, disperse is really great in terms of space usage
16:05 Plam what can I replace it with?
16:05 Gambit15 Sharding
16:05 Gambit15 http://blog.gluster.org/2015/12/introducing-shard-translator/
16:06 Plam but sharding is an option on top of various volume types
16:06 Plam I know what is sharding
16:06 Plam but some gluster devs told me not to use sharding with disperse
16:07 Plam so you said disperse/EC volumes are deprecated?
16:08 Gambit15 Since I started using Gluster about 6 months ago, the topic of dispersed volumes being deprecated has come up a number of times.
16:08 Gambit15 I'm not a Gluster dev however
16:08 Plam so what did you replace it with?
16:08 Plam because it's very flexible and powerful
16:08 Gambit15 Sharding
16:08 Plam sharding is just a way to chunk data, it's **not** a volume type
16:08 Plam sharding with which mode? replicated?
16:08 Gambit15 The most common configuration is replicated-distributed with sharding enabled
16:09 Plam AFAIK, it's not the same thing as disperse in terms of where files are stored
16:10 Plam e.g. disperse will put some files on some bricks and others on other bricks
16:10 Gambit15 No, that's distributed
16:10 Gambit15 The old disperse was a way of striping the bricks
16:11 Plam okay let me try distributed-replicated with sharding, I think I did but I had issues at the time
16:11 Plam thx for the hints
16:11 Gambit15 http://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Architecture/
16:11 glusterbot Title: Architecture - Gluster Docs (at gluster.readthedocs.io)
16:11 Plam Gambit15: I always read the doc ;)
16:12 Plam I've been doing benchmarks for a month
16:12 Gambit15 Just posting because it has useful graphics detailing the differences
16:13 Gambit15 FWIW, I had big performance issues with sharding when I first tried it, and haven't yet had a chance to return to testing it. It doesn't appear to be a common case however, as most people seem to have success with it
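A rough sketch of the replicated setup with sharding that Gambit15 describes; the host, brick and volume names are hypothetical and the shard block size is only an example:

    # three-way replica across three hosts, then enable the shard translator
    gluster volume create vmstore replica 3 \
        server1:/bricks/vmstore server2:/bricks/vmstore server3:/bricks/vmstore
    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 64MB
    gluster volume start vmstore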
16:16 shyam Plam: disperse is not deprecated
16:16 shyam Plam: stripe is
16:16 shyam Gambit15: disperse is different than stripe, and sharding is the replacement for stripe
16:17 JoeJulian MadPsy: The kernel nfs client caches https://www.kernel.org/doc/Documentation/filesystems/caching/fscache.txt
16:17 Plam shyam: okay that's what I thought
16:17 Plam what about the healing process in disperse? it seems to heal a whole file and not chunks
16:18 JoeJulian Plam, Gambit15: disperse is not deprecated, stripe is.
16:18 * Gambit15 hangs head
16:19 MadPsy JoeJulian, yeah I did a bit more digging. Do you know if write-behind or flush-behind will help with file operations reporting as complete before all the bricks have committed to disk?
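If write-behind or flush-behind do turn out to be the culprit, both are per-volume options that can be toggled; a sketch, assuming a hypothetical volume gv0:

    gluster volume set gv0 performance.write-behind off
    gluster volume set gv0 performance.flush-behind off
    # revert later by setting them back to "on"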
16:19 JoeJulian Damn, I should finish reading the backlog sometimes before I post.
16:20 shaunm joined #gluster
16:20 JoeJulian Plam: Isn't that normal for EC? iirc, there's no way of knowing what part of the file needs healed, just that the hash doesn't match.
16:21 Plam JoeJulian: well, the EC algorithm is making chunks right?
16:21 Plam I'm no expert
16:21 Plam just asking :p
16:21 Plam anyway, with EC+sharding it worked but I heard it's not a good idea
16:21 JoeJulian I'm not either and haven't used disperse.
16:21 Plam I like disperse, it's elegant
16:22 JoeJulian +1
16:22 Plam I've read some stuff on Reed Solomon algorithm, very interesting
16:23 Gambit15 Hmm...g'damn, how did those wires get crossed, I've had that mixup stuck in my head for a good while now :/
16:23 * Gambit15 needs a coffee
16:24 Gambit15 Plam - apologies for that confusion!
16:24 Plam no problem Gambit15 ;
16:24 Plam ;)
16:28 musa22 joined #gluster
16:39 susant joined #gluster
16:40 Hamburglr I'm confused, how do you access gluster with smb? The docs say to mount the gluster volume and then export it in smb.conf. Which method should be used to mount the volume?
16:41 JoeJulian That's got to be some old docs. The VFS which uses libgfapi is the better way.
16:42 Hamburglr so not here? http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Clients/#export-samba
16:42 glusterbot Title: Setting Up Clients - Gluster Docs (at gluster.readthedocs.io)
16:42 JoeJulian I'm not saying that the official docs aren't in need of updates...
16:42 Hamburglr well I'm trying to get the small file improvements w/ md-cache so however is best is what I'm looking for
16:43 JoeJulian They're not as bad as they once were, but every open source project could use documentation improvements.
16:43 JoeJulian You want to use the vfs. Any time you can avoid fuse you avoid two context changes.
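A minimal smb.conf share using the glusterfs VFS module (Samba then speaks libgfapi directly instead of re-exporting a FUSE mount); the volume name and volfile server are assumptions:

    [gv0]
        vfs objects = glusterfs
        glusterfs:volume = gv0
        glusterfs:volfile_server = localhost
        path = /                  # relative to the root of the gluster volume
        read only = no
        kernel share modes = no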
16:51 riyas joined #gluster
16:55 irated JoeJulian: is there a good way around LOOKUP amplification yet?
16:56 ivan_rossi left #gluster
16:59 JoeJulian @php
16:59 glusterbot JoeJulian: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
16:59 glusterbot JoeJulian: --fopen-keep-cache
16:59 JoeJulian irated: #2 might help
16:59 irated hmmm...
17:00 irated I just destroyed my cluster
17:00 irated thinking about doing that though.
17:00 JoeJulian on purpose, I hope.
17:00 irated Yeah, I use terraform to build out the test infra in openstack
17:03 irated So my mount would look like...
17:05 irated echo '${openstack_compute_instance_v2.gluster-node.0.access_ip_v4}:/fsstore  /mnt/fsstore  glusterfs  defaults,attribute-timeout=HIGH,entry-timeout=HIGH,negative-timeout=HIGH,fopen-keep-cache  0 0'
17:06 JoeJulian s/HIGH/600/g
17:06 glusterbot JoeJulian: Error: u"s/HIGH/600/g An error has occurred and has been logged. Please contact this bot's administrator for more information." is not a valid regular expression.
17:06 glusterbot JoeJulian: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
17:06 JoeJulian yeah, yeah...
17:06 irated Thats what i thought
17:06 irated the fopen-keep-cache throws me off
17:06 JoeJulian timout in seconds.
17:07 JoeJulian timeout, too.
17:07 irated okay so it needs to be =XX
17:07 bowhunter joined #gluster
17:07 JoeJulian If you knew Tim, you'd want him out in seconds too.
17:07 irated lol
17:08 JoeJulian It's only not XX because the point is to choose a high number. There's no really clear way to do that factoid without making it uncomfortably long.
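With JoeJulian's substitution applied, irated's fstab line would come out roughly as follows (the server is a placeholder and 600 is just "a high number" of seconds):

    <server>:/fsstore  /mnt/fsstore  glusterfs  defaults,attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache  0 0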
17:11 glusterbot joined #gluster
17:11 glusterbot` joined #gluster
17:11 glusterbot joined #gluster
17:12 BitByteNybble110 joined #gluster
17:13 glusterbot joined #gluster
17:14 glusterbot joined #gluster
17:14 kpease joined #gluster
17:15 shyam joined #gluster
17:15 JoeJulian dammit
17:15 glusterbot joined #gluster
17:16 irated so JoeJulian will those help with writes at all?
17:17 irated mount-point/LOGO/00/dc4/c8b/a46/44f/a68/file
17:18 ic0n joined #gluster
17:18 glusterbot joined #gluster
17:20 JoeJulian irated: They should help with lookups. Writes are actually pretty efficient already. Going through a fuse mount, the least efficient part is the two context switches each write op has to go through to move that data around.
17:21 irated While testing I dropped the levels of directories to 4 and that seemed to help a lot.
17:21 irated I'm assuming that is because I remove 4 levels of lookups + create?
17:21 irated s/remove/removed/
17:21 glusterbot What irated meant to say was: I'm assuming that is because I removed 4 levels of lookups + create?
17:22 JoeJulian I would have to guess that's a java thing. Perhaps it's doing directory listings at each level of the path?
17:22 irated it's checking if each level exists, yeah
17:23 JoeJulian Is it just a lookup, or is it actually doing a readdir at each level?
17:24 JoeJulian If it does a readdir, then I would look to see if it's doing a stat for each file in each dir (not sure why it would, but if it is that's a lot of unneeded fops).
17:24 irated LOOKUP
17:25 irated when i profiled LOOKUP was at like 250k for 4000 files
17:27 irated https://gist.github.com/pryorda/a96ed9f2aef1bf56e181face138091a7
17:27 glusterbot Title: gist:a96ed9f2aef1bf56e181face138091a7 · GitHub (at gist.github.com)
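For reference, per-fop counts like the LOOKUP figure irated quotes come from gluster's built-in profiler; a sketch with a hypothetical volume name:

    gluster volume profile gv0 start
    # ... run the workload ...
    gluster volume profile gv0 info    # per-brick fop counts and latencies
    gluster volume profile gv0 stop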
17:43 ahino joined #gluster
17:49 farhorizon joined #gluster
17:52 armyriad joined #gluster
17:54 bbooth joined #gluster
18:03 jkroon joined #gluster
18:05 jwd joined #gluster
18:06 shyam joined #gluster
18:08 squizzi joined #gluster
18:11 jwd joined #gluster
18:27 mb_ joined #gluster
18:27 msvbhat joined #gluster
18:37 overyander joined #gluster
18:37 overyander does gluster do deduplication?
18:37 bb_ joined #gluster
18:37 JoeJulian overyander: not yet
18:38 overyander i have a large dataset that i'm needing to replicate over a slow (1mb) connection. any suggestions?
18:38 overyander i found opendedup which looked promising, but the replication aspect is broken.
18:38 JoeJulian sneakernet
18:38 Gambit15 ZFS does however, and that's a common underlying brick FS. It's a massive memory hog however, and rarely worth it
18:39 irated Gambit15: when did you use ZFS last?>
18:39 snehring also can have negative performance impacts
18:39 Gambit15 irated, today
18:39 snehring dedup on zfs that is
18:39 Gambit15 I'm talking about ZFS' dedup, not ZFS overall!
18:39 irated oh yes, that is very very correct
18:40 irated they recommend compression over dedup iirc
18:40 Gambit15 Yup. The default compression is amazingly efficient & has barely any CPU cost.
18:40 irated yep
18:40 irated I still run on a 10 year old super micro at home
18:41 irated the right zil + l2arc make huge differences
18:41 overyander here's the scenario i'm looking for... let's say we have 15tb of data. this 15tb dedupes down to 200gb. all of this is replicated to remote site. if i add a 600gb file which only contains 1gb of unique data, i only want to send 1gb to the remote site instead of the full 600gb.
18:41 Gambit15 I compared it with gzip9 once, and I think the most I got was a 30% improvement, but at >=10x CPU load
18:41 irated since it's a home system i use a ramdisk as my zil.. (DONT MAKE FUN YO!)
18:42 snehring nice
18:42 Gambit15 overyander, either use rsync or ZFS' replication features
18:43 Gambit15 That said, doesn't Gluster's geo-rep also only transfer diffs?
18:44 irated overyander: I hate to say it, but even if you dedup, 200GB is going to take days. Might be better off with sneakernet
18:44 snehring gluster georep is a modified rsync iirc
18:45 overyander shipping the data wouldn't be very reliable due to the multiple countries it would have to go through.
18:45 overyander data originates in the US and has to go to India.
18:45 irated oh yuck..
18:45 overyander yeah
18:45 Gambit15 And you're limited to a 1Mbit connection?!
18:45 irated yeah if you are okay with the amount of time it will take...
18:46 overyander both ends have 20mb wans but when you add the latency (avg 400 - 800ms) and packet loss (3 - 10%) you end up with about 1mb
18:47 Gambit15 Eww
18:47 Gambit15 overyander, http://blog.fosketts.net/2016/08/18/migrating-data-zfs-send-receive/
18:48 overyander so, i'm looking for something with block level deduplication. the project folders don't contain much duplication, but when you combine all the projects there's massive duplication.
18:48 Gambit15 https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/
18:48 Gambit15 https://pthree.org/2013/12/18/zfs-administration-appendix-d-the-true-cost-of-deduplication/
18:49 Gambit15 ...that last one on dedup for you
18:49 Gambit15 As mentioned, simply diffing & compressing your data will probably be far more efficient
18:50 overyander is the diffing file or block based?
18:51 Gambit15 Read the first pthree article
18:52 Gambit15 AFAIK, it's the most efficient method you'll currently find
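A sketch of the incremental zfs send/receive approach from those articles, with hypothetical pool/dataset names and assuming ssh access to the remote site; on a slow link the stream can also be piped through a compressor:

    # initial full copy
    zfs snapshot tank/projects@base
    zfs send tank/projects@base | ssh remote zfs receive backup/projects
    # later runs only ship the blocks changed since the previous snapshot
    zfs snapshot tank/projects@day1
    zfs send -i tank/projects@base tank/projects@day1 | ssh remote zfs receive backup/projects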
18:55 nirokato joined #gluster
19:00 unclemarc joined #gluster
19:08 bb_ I was wondering if anyone was familiar with split-brain correction on sparse virtual machine files. I've got a 1 x 2 = 2 replicated setup. I had split-brain on a file and went through the process of correcting this issue. When the file was done healing it was 4GB on one server and 9GB on the other
19:09 bb_ Does this sound okay?
19:12 flying joined #gluster
19:12 JoeJulian Yeah, it's fine.
19:13 JoeJulian Maybe not ideal but it won't hurt anything.
19:18 bb_ Thanks.
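For the record, replica split-brain can be resolved from the gluster CLI (3.7 and later); a sketch with hypothetical volume, brick and file names:

    gluster volume heal gv0 info split-brain
    # keep the copy from a chosen brick:
    gluster volume heal gv0 split-brain source-brick server1:/bricks/gv0 /images/vm.img
    # or keep the bigger / most recently modified copy:
    gluster volume heal gv0 split-brain bigger-file /images/vm.img
    gluster volume heal gv0 split-brain latest-mtime /images/vm.img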
19:21 shruti` joined #gluster
19:21 sanoj_ joined #gluster
19:21 sac` joined #gluster
19:22 lalatend1M joined #gluster
19:22 farhorizon joined #gluster
19:29 Gambit15 JoeJulian: "During snapshot creation some of the fops are blocked to guarantee crash consistency. There is a default time-out of 2 minutes, if snapshot creation is not complete within that span then fops are unbarried. If unbarrier happens before the snapshot creation is complete then the snapshot creation operation fails. This to ensure that the snapshot is in a consistent state."
19:30 Gambit15 Does that mean writes are blocked during snapshot creation?
19:30 mb_ joined #gluster
19:31 bbooth joined #gluster
19:32 musa22 joined #gluster
19:41 jdossey joined #gluster
19:46 mhulsman joined #gluster
20:06 derjohn_mob joined #gluster
20:18 JoeJulian Gambit15: It does read that way, yes. Since it should be nearly instantaneous, that shouldn't really be noticeable.
20:21 Gambit15 JoeJulian, k, cheers. I want to do a one-off snapshot of a volume hosting a couple of idling VMs, and fully pausing them to do it properly will be a PITA for reasons I shan't go into
20:21 JoeJulian I believe that's the point, yes.
20:21 JoeJulian This whole snapshotting feature was specifically intended to be used with VM images.
20:23 irated JoeJulian: testing with your recommendations now.
20:23 Gambit15 If you're doing it properly, especially with busy VMs & DBs, best practice dictates you pause the VM during the snapshot period. That goes for both LVM & ZFS based snapshots (of which I understand Gluster uses the former)
20:24 irated freenas has a snapshot utility now that ties in with vcenter and stuff. So it takes snapshots automagically
20:26 Gambit15 And that's one of the main needs of the integration, to manage the VM's state during the snapshot! If you simply snapshotted the live dataset, you'd run the risk of corruptions
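The one-off snapshot Gambit15 wants is a single CLI call, provided the bricks sit on thinly-provisioned LVM; a sketch with hypothetical names:

    gluster snapshot create pre-maint vmstore no-timestamp description "one-off before maintenance"
    gluster snapshot list vmstore
    gluster snapshot activate pre-maint    # required before the snapshot can be mounted or restored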
20:35 musa22 joined #gluster
20:50 nh2_ joined #gluster
20:51 nh2_ why is it that when I `rsync -a` a file to gluster, the mtime is truncated to microseconds?
20:52 jobewan joined #gluster
20:56 shruti joined #gluster
20:56 sac joined #gluster
20:57 nh2_ e.g. `touch -d '2017-01-01 00:00:00.123456001' myfile` doesn't work, sets it to `00:00:00.123456000`
20:57 sanoj_ joined #gluster
20:58 lalatenduM joined #gluster
21:03 bbooth joined #gluster
21:03 JoeJulian nh2_: I see what you're saying. Looking...
21:05 mhulsman joined #gluster
21:08 nh2_ JoeJulian: I cannot find any documentation on what timestamp resolution gluster supports -- but it looks like a bug to me, as I would expect it to have the same resolution as the underlying file system (XFS in my case, has nanosecond resolution)
21:08 JoeJulian So if the mtime has all 9 digits on the brick, that will pass through to the client.
21:08 JoeJulian It looks like a bug to me too.
21:08 nh2_ JoeJulian: also, if I just `touch` a new file, that does have nanosecond resolution. But not if I set it for an existing file, e.g. with `touch -d`
21:12 JoeJulian nh2_: https://github.com/gluster/glusterfs/blob/master/libglusterfs/src/common-utils.c#L3800-L3841
21:12 glusterbot Title: glusterfs/common-utils.c at master · gluster/glusterfs · GitHub (at github.com)
21:13 bbooth joined #gluster
21:13 JoeJulian So it's working as intended but they seem to have suggested that this could be an easy fix.
21:13 JoeJulian Feel free to file a bug report
21:13 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
21:14 Peppard joined #gluster
21:14 JoeJulian The triage meetings are on Tuesdays, so get it in today. :)
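nh2_'s reproduction boils down to something like the following, assuming a gluster fuse mount at /mnt/gv0; the sub-microsecond digits get dropped on the setattr path:

    touch -d '2017-01-01 00:00:00.123456789' /mnt/gv0/myfile
    stat -c '%y' /mnt/gv0/myfile    # prints ...00:00:00.123456000 rather than .123456789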
21:24 k4n0 joined #gluster
21:24 sudoSamurai joined #gluster
21:27 jdossey joined #gluster
21:28 musa22 joined #gluster
21:32 sudoSamurai anybody using the community version of gluster on RHEL7?  I'm trying to get it installed but I can't seem to find good repo information
21:38 JoeJulian I don't know how the RHEL distro uses sigs, but the CentOS storage sig is the repo you want.
21:38 JoeJulian s/storage sig/gluster storage sig/
21:39 glusterbot What JoeJulian meant to say was: I don't know how the RHEL distro uses sigs, but the CentOS gluster storage sig is the repo you want.
21:39 Marbug joined #gluster
21:40 sudoSamurai I've tried to add the storage sig, but the only installation method I've seen uses the CentOS Extras repo to install the SIG repo.
21:41 sudoSamurai I have the Fedora EPEL repo set up, but it doesn't seem to contain the centos-release-gluster repo
21:45 JoeJulian I thought EPEL was deprecated for EL7
21:46 JoeJulian https://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/EPEL.README
21:46 Seth_Karlo joined #gluster
21:49 JoeJulian http://www.itzgeek.com/how-tos/linux/centos-how-tos/install-and-configure-glusterfs-on-centos-7-rhel-7.html just has you create a .repo file.
21:50 sudoSamurai JoeJulian: I've tried the info from the first link.  The centos-release-gluster package doesn't exist in any Redhat repos that I can find.  The second link goes directly to the repo and I've tried manually setting that up in yum.repos.d but it complains about the packages not being signed
22:00 ashiq joined #gluster
22:04 shyam joined #gluster
22:07 jdossey joined #gluster
22:13 k4n0 joined #gluster
22:14 msvbhat joined #gluster
22:20 JoeJulian sudoSamurai: Looks like the gpg keys are at http://mirror.centos.org/centos/
22:20 glusterbot Title: Index of /centosCentOS Mirror (at mirror.centos.org)
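One workaround when centos-release-gluster isn't available is a hand-written .repo file pointing at the Storage SIG mirror; a sketch only, since the exact gluster release path is an assumption and should match whatever the SIG currently publishes:

    # /etc/yum.repos.d/centos-gluster.repo
    [centos-gluster]
    name=CentOS Storage SIG - GlusterFS
    baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-3.9/
    gpgcheck=0    # or import the Storage SIG key from the mirror above and set gpgcheck=1

    yum install glusterfs-server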
22:24 nh2_ joined #gluster
22:48 masber joined #gluster
23:03 farhorizon joined #gluster
23:19 jwd joined #gluster
23:22 jdossey joined #gluster
23:27 fyxim_ joined #gluster
23:27 f0rpaxe_ joined #gluster
23:29 zerick_ joined #gluster
23:29 nixpanic_ joined #gluster
23:29 niknakpa1dywak joined #gluster
23:29 nixpanic_ joined #gluster
23:29 mober joined #gluster
23:30 Nuxr0 joined #gluster
23:30 pasik_ joined #gluster
23:32 mpingu_ joined #gluster
23:33 rideh- joined #gluster
23:33 Igel_ joined #gluster
23:33 DJCl34n joined #gluster
23:33 ueberall joined #gluster
23:33 ueberall joined #gluster
23:34 DJClean joined #gluster
23:35 pocketprotector joined #gluster
23:37 jdossey joined #gluster
23:38 LiftedKilt joined #gluster
23:41 aronnax joined #gluster
23:42 XpineX joined #gluster
23:51 farhorizon joined #gluster
