IRC log for #gluster, 2015-10-26


All times shown according to UTC.

Time Nick Message
00:30 julim joined #gluster
00:58 mlhamburg_ joined #gluster
01:00 vimal joined #gluster
01:10 zhangjn joined #gluster
01:18 EinstCrazy joined #gluster
01:30 Lee1092 joined #gluster
01:46 mpietersen joined #gluster
01:57 harish joined #gluster
02:07 EinstCrazy joined #gluster
02:10 rafi joined #gluster
02:11 nangthang joined #gluster
02:11 harish joined #gluster
02:18 gem joined #gluster
02:18 nangthang joined #gluster
02:45 dblack joined #gluster
02:45 msvbhat joined #gluster
02:45 bharata-rao joined #gluster
02:59 atinm joined #gluster
03:19 wushudoin joined #gluster
03:26 overclk joined #gluster
03:30 haomaiwa_ joined #gluster
03:40 stickyboy joined #gluster
03:45 neha_ joined #gluster
03:50 nbalacha joined #gluster
03:52 shubhendu_ joined #gluster
03:55 haomaiwa_ joined #gluster
03:56 itisravi joined #gluster
04:07 itisravi left #gluster
04:13 kdhananjay joined #gluster
04:13 nishanth joined #gluster
04:20 gem joined #gluster
04:22 kshlm joined #gluster
04:26 zhangjn joined #gluster
04:27 zhangjn joined #gluster
04:28 zhangjn joined #gluster
04:35 anil joined #gluster
04:37 ndarshan joined #gluster
04:37 pppp joined #gluster
04:40 kshlm joined #gluster
04:44 TheSeven joined #gluster
04:44 rafi joined #gluster
04:44 ppai joined #gluster
04:51 neha_ joined #gluster
04:58 kovshenin joined #gluster
05:07 vmallika joined #gluster
05:12 atinm joined #gluster
05:22 aravindavk joined #gluster
05:30 prabu joined #gluster
05:31 kotreshhr joined #gluster
05:33 Bhaskarakiran joined #gluster
05:37 kanagaraj joined #gluster
05:42 Vaelatern joined #gluster
05:49 atalur joined #gluster
05:49 hagarth joined #gluster
05:50 atinm joined #gluster
05:51 R0ok_ joined #gluster
05:59 Vaelater1 joined #gluster
06:02 skoduri joined #gluster
06:04 Manikandan joined #gluster
06:05 ashiq joined #gluster
06:05 marcoceppi joined #gluster
06:06 hgowtham joined #gluster
06:22 F2Knight joined #gluster
06:28 karnan joined #gluster
06:33 jiffin joined #gluster
06:37 Bhaskarakiran joined #gluster
06:42 nangthang joined #gluster
06:47 arcolife joined #gluster
06:57 atinm joined #gluster
07:05 hagarth joined #gluster
07:07 DV joined #gluster
07:12 Sjors joined #gluster
07:14 spalai joined #gluster
07:18 atalur joined #gluster
07:20 sakshi joined #gluster
07:21 atinm joined #gluster
07:21 mhulsman joined #gluster
07:22 ppai joined #gluster
07:27 jtux joined #gluster
07:28 PaulCuzner joined #gluster
07:29 raghu joined #gluster
07:34 jwd joined #gluster
07:53 atalur joined #gluster
07:53 DV joined #gluster
07:54 mbukatov joined #gluster
07:57 tomatto joined #gluster
08:02 paraenggu joined #gluster
08:09 deniszh joined #gluster
08:15 Akee joined #gluster
08:16 bhuddah joined #gluster
08:20 [Enrico] joined #gluster
08:20 paraenggu joined #gluster
08:28 Saravana_ joined #gluster
08:29 mlhamburg joined #gluster
08:32 rafi joined #gluster
08:34 poornimag joined #gluster
08:39 ivan_rossi joined #gluster
08:40 rjoseph joined #gluster
08:42 ivan_rossi joined #gluster
08:45 ppai joined #gluster
08:46 fsimonce joined #gluster
08:47 Slashman joined #gluster
08:47 paraenggu left #gluster
08:49 sripathi joined #gluster
08:53 Norky joined #gluster
08:53 dusmant joined #gluster
08:59 DV joined #gluster
09:00 ctria joined #gluster
09:08 LebedevRI joined #gluster
09:09 atrius joined #gluster
09:11 anrao joined #gluster
09:16 PaulCuzner joined #gluster
09:25 ndarshan joined #gluster
09:32 Akee joined #gluster
09:32 EinstCrazy joined #gluster
09:34 Bhaskarakiran_ joined #gluster
09:39 haomaiwa_ joined #gluster
09:41 stickyboy joined #gluster
09:42 hgowtham joined #gluster
09:51 shubhendu_ joined #gluster
09:58 julim joined #gluster
10:03 jwaibel joined #gluster
10:07 Saravana_ joined #gluster
10:21 thoht_ i want to backup my VM files in my gluster share
10:21 thoht_ can i use gluster snapshot ?
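For reference, GlusterFS volume snapshots (available since 3.6, and requiring the bricks to sit on thinly provisioned LVM) are driven by the gluster snapshot CLI; a minimal sketch, with "myvol" and "snap1" as hypothetical names:

    # create a point-in-time snapshot of the volume
    gluster snapshot create snap1 myvol
    # list existing snapshots and activate one so it can be mounted and browsed
    gluster snapshot list myvol
    gluster snapshot activate snap1

A snapshot guards against accidental changes inside the volume; it is not by itself an off-cluster backup of the VM images.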
10:27 nishanth joined #gluster
10:35 hagarth joined #gluster
10:36 dR0M3st3R joined #gluster
10:36 arcolife joined #gluster
10:38 Bhaskarakiran joined #gluster
10:38 DV joined #gluster
10:41 kshlm joined #gluster
10:41 EinstCrazy joined #gluster
10:44 ashiq joined #gluster
10:44 Trefex joined #gluster
10:56 dude joined #gluster
10:57 ramky joined #gluster
10:58 Guest91658 hi all! I'm having issues updating from GlusterFS 3.7.2 to 3.7.5 on RHEL7... I'm kind of stuck, could someone please help me?
10:59 Guest91658 it's a simple 4 node setup with a single volume accessed using the fuse client only
11:01 harish_ joined #gluster
11:02 ThiasDude "gluster volume status" from any 3.7.2 node says "Staging failed on 192.168.14.11. Error: Volume name get failed" (that .11 is the 3.7.5 node)
11:03 ndarshan joined #gluster
11:07 ThiasDude ...downgrading back to 3.7.2 works.
11:13 ThiasDude and I did all of the '*insecure' related changes, since that was causing problems initially
11:21 rafi joined #gluster
11:25 kovshenin joined #gluster
11:27 DV joined #gluster
11:27 Saravana_ joined #gluster
11:27 Saravanakmr joined #gluster
11:28 R0ok_ I've seen a lot of 3.7.5 upgrade related issues
11:29 R0ok_ ThiasDude: Are all your nodes on 3.7.5 ? Or is it just one node on 3.7.5 & the rest on 3.7.4 ?
11:31 spalai left #gluster
11:35 shubhendu_ joined #gluster
11:40 nishanth joined #gluster
11:41 anrao joined #gluster
11:44 atrius_ joined #gluster
11:45 DV joined #gluster
11:50 ppai joined #gluster
11:50 firemanxbr joined #gluster
11:54 ndarshan joined #gluster
11:55 ThiasDude R0ok_: I tried upgrading just one of the 4 server nodes first, from 3.7.2 to 3.7.5, and that's not working, with the messages about failing to find the volume name
11:55 kotreshhr left #gluster
11:57 R0ok_ ThiasDude: I don't know if it's a good idea to have peers using different versions. I prefer having all clients & servers using the same version
11:59 overclk joined #gluster
12:03 ThiasDude R0ok_: me too, but I can't take the entire cluster down, I have always done "rolling" updates for minor releases, and here my problem is that the first node to be updated no longer works with the others
12:05 R0ok_ ThiasDude: for major upgrades, you just have to find/organize some downtime. Anyway, I haven't seen any documentation/blog on doing a rolling 3.7.5 upgrade
12:06 neha_ joined #gluster
12:06 ThiasDude R0ok_: yeah, documentation on minor upgrades seems to be non-existent, which from my understanding means it should "just work"
12:07 ThiasDude even though for 3.7.4+ there is the gotcha about the insecure ports, which is briefly mentioned in the announcements
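For context, the "'*insecure'" changes referred to here are usually the two settings that allow clients connecting from non-privileged ports; a sketch, assuming a volume named myvol:

    # per-volume option: let bricks accept connections from ports above 1024
    gluster volume set myvol server.allow-insecure on
    # glusterd side: add this option to /etc/glusterfs/glusterd.vol, then restart glusterd
    #   option rpc-auth-allow-insecure on
    systemctl restart glusterd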
12:11 Pupeno joined #gluster
12:13 nishanth joined #gluster
12:16 ndarshan joined #gluster
12:18 shubhendu_ joined #gluster
12:20 R0ok_ ThiasDude: We are still on 3.5.5 & we've been contemplating upgrading to 3.7.5. But the number of issues related to the upgrade process is making me reconsider going to 3.7.4
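For reference, the commonly described rolling pattern for a replicated volume is to upgrade one server at a time and let self-heal catch up in between; whether it works cleanly for a given version pair is, as the discussion above shows, not guaranteed. A hedged sketch, assuming RHEL7/systemd and a volume named myvol:

    # on one server at a time:
    systemctl stop glusterd
    pkill glusterfsd; pkill glusterfs     # stop remaining brick / self-heal processes
    yum update 'glusterfs*'               # or the distro's equivalent package command
    systemctl start glusterd
    # before moving to the next server, wait until pending heals drain:
    gluster volume heal myvol info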
12:26 B21956 joined #gluster
12:37 unclemarc joined #gluster
12:37 thoht___ joined #gluster
12:38 thoht__ joined #gluster
12:41 anil joined #gluster
12:42 Saravanakmr joined #gluster
12:46 ira joined #gluster
12:54 Trefex joined #gluster
13:01 nishanth joined #gluster
13:03 kotreshhr joined #gluster
13:03 kotreshhr left #gluster
13:03 kdhananjay joined #gluster
13:03 shyam joined #gluster
13:06 chirino joined #gluster
13:07 shubhendu_ joined #gluster
13:12 mpietersen joined #gluster
13:16 SOLDIERz joined #gluster
13:29 overclk joined #gluster
13:30 ctria joined #gluster
13:30 overclk_ joined #gluster
13:30 maveric_amitc_ joined #gluster
13:32 maveric_amitc_ joined #gluster
13:35 vimal joined #gluster
13:38 overclk joined #gluster
13:38 hagarth joined #gluster
13:47 dgandhi joined #gluster
14:00 overclk joined #gluster
14:02 nbalacha joined #gluster
14:30 overclk joined #gluster
14:30 jmarley joined #gluster
14:32 RayTrace_ joined #gluster
14:34 kdhananjay joined #gluster
14:47 nage joined #gluster
14:53 ayma joined #gluster
14:55 Pupeno joined #gluster
14:55 bennyturns joined #gluster
14:56 wushudoin joined #gluster
14:58 maserati joined #gluster
15:06 P0w3r3d joined #gluster
15:09 _shaps_ joined #gluster
15:12 skylar joined #gluster
15:25 jbrooks joined #gluster
15:25 F2Knight joined #gluster
15:28 atinm joined #gluster
15:33 coredump joined #gluster
15:34 ramky joined #gluster
15:35 skoduri joined #gluster
15:39 rafi joined #gluster
15:41 overclk joined #gluster
15:42 stickyboy joined #gluster
15:44 monotek1 joined #gluster
15:45 skylar joined #gluster
15:57 RayTrac__ joined #gluster
16:07 cholcombe joined #gluster
16:13 side_control joined #gluster
16:14 nage joined #gluster
16:23 calavera joined #gluster
16:41 Gill joined #gluster
16:55 DavidVargese joined #gluster
16:55 bennyturns joined #gluster
16:56 overclk_ joined #gluster
17:01 DavidVargese hi, i've set up gluster with 4 nodes and 2 replicas. my question is: i have 8 servers that connect to the gluster storage using the gluster client. for the 1st 2 servers i set up the client manually, and the additional clients i just cloned from the existing 2. however the cloned clients did get updated when a file is updated unless i restart the client. do i need to set up all clients manually?
17:02 DavidVargese *did NOT get updated
17:04 aravindavk joined #gluster
17:09 Rapture joined #gluster
17:12 nage joined #gluster
17:16 JoeJulian DavidVargese: you don't "set up" clients. Haven't had to do that since gluster 3.1. You just mount the volume from a server.
17:17 JoeJulian http://gluster.readthedocs.org/en/latest/Quick-Start-Guide/Quickstart/#step-6-testing-the-glusterfs-volume
17:17 glusterbot Title: Quick start Guide - Gluster Docs (at gluster.readthedocs.org)
17:18 JoeJulian http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Setting%20Up%20Clients/#mounting-volumes
17:18 glusterbot Title: Setting Up Clients - Gluster Docs (at gluster.readthedocs.org)
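In other words, with current releases the "client setup" is just a mount; a minimal sketch, with server1, myvol and /mnt/gluster as hypothetical names:

    # one-off mount using the native fuse client
    mount -t glusterfs server1:/myvol /mnt/gluster
    # or persistently, by adding this line to /etc/fstab
    server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev  0 0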
17:20 DavidVargese JoeJulian, i followed step by step from digitalocean guide
17:20 dlambrig joined #gluster
17:20 DavidVargese JoeJulian, https://www.digitalocean.com/community/tutorials/how-to-create-a-redundant-storage-pool-using-glusterfs-on-ubuntu-servers
17:20 glusterbot Title: How To Create a Redundant Storage Pool Using GlusterFS on Ubuntu Servers | DigitalOcean (at www.digitalocean.com)
17:21 dlambrig left #gluster
17:22 JoeJulian Ok, then I guess I'm just confused. They seem to say the same thing.
17:22 DavidVargese JoeJulian, so i dont need to setup the clients?
17:22 DavidVargese JoeJulian, is that the cause of my problem?
17:25 Rapture joined #gluster
17:25 JoeJulian I assumed you were following some ancient instruction that had you writing vol files.
17:25 jwaibel joined #gluster
17:27 DavidVargese JoeJulian, the guide was written on Feb 5, 2014. i dont think its ancient yet. :)
17:27 DavidVargese can you point me to the latest guide for ubuntu
17:31 overclk joined #gluster
17:33 JoeJulian There's nothing special for ubuntu. The only difference is the package manager that you would want to point to the gluster ,,(ppa)
17:33 glusterbot The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
17:33 JoeJulian So what isn't working?
17:33 DavidVargese the servers working fine. only on client side
17:34 JoeJulian Are you writing directly to the bricks?
17:34 DavidVargese since im using the gluster client. 2 of my servers i set up manually, and the rest i just cloned from those 2.
17:34 JoeJulian @glossary
17:34 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
17:34 DavidVargese im not really sure
17:35 JoeJulian So when we say server, we're talking about the storage servers. I'm not sure if we're using the same terms.
17:35 kkeithley digital ocean's guide looks okay for the most part. I'd change "sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4" to "sudo add-apt-repository ppa:gluster/ubuntu-glusterfs-3.7"
17:36 DavidVargese JoeJulian, i think so.
17:37 JoeJulian So you've said you cloned clients, then you said you've cloned servers. If it's servers you cloned, then the problem is in the state directory, /var/lib/glusterd. That directory should not be cloned.
17:37 DavidVargese after installing the client, i added to the bottom of /etc/fstab : web1:/volume_1  /storage-pool   glusterfs defaults,_netdev 0 0
17:38 DavidVargese i need to clarify something: the bricks are not cloned.
17:38 DavidVargese only the clients
17:39 DavidVargese i set up gluster with 4 servers with 2 replicas. i did all that step by step on each server/brick
17:40 kkeithley you originally said four node, replica 2.  That means four servers.  Then you said you only set up two servers and cloned two more <something>.
17:40 DavidVargese after that, on the client side, at 1st i set up 2 clients manually, and the rest i just cloned from those 2 clients.
17:40 JoeJulian Ok, that sounds reasonable.
17:41 mhulsman joined #gluster
17:41 DavidVargese JoeJulian, will that cause a problem? because right now the servers that i set up manually dont cause any problem, but the cloned servers do not get updated when a file is edited unless i reboot.
17:43 JoeJulian A cloned *client* would not cause a problem. You write to the client mountpoint (/storage-pool) and the file is updated on all the replica servers - instantly available to any other client.
17:43 DavidVargese or is it that the storage did not send the update to the clone since it is identified as the same as the original client? is there a client id that i need to set up?
17:44 JoeJulian there is no client id
17:44 DavidVargese JoeJulian, yeah i know that. thats how it should be. right now, on the cloned client, i need to reboot, or i think i need to unmount and remount.
17:45 JoeJulian check your client logs
17:45 DavidVargese at the time, i just tried a reboot. i think an unmount and remount would also fix it.
17:45 DavidVargese on the original or clone?
17:46 JoeJulian You're describing something that "can't" happen, so your guess is as good as mine. I'd check every log that might be involved for an anomaly.
17:48 DavidVargese im not sure what to look for in the logs. i just opened them and there are so many lines that i dont really understand.
17:49 JoeJulian The best way to diagnose something is to truncate the logs, create the issue, then look to see what happened.
17:53 jonfatino JoeJulian: would you recommend gluster with replica 2, 3, 4, 5, etc. (increasing each time I add a node) for a CDN network? or would you recommend the geo-replication setup
17:53 DavidVargese its very hard for me to do that, because its live. we use gluster to cluster the web servers and point the apache directories to the gluster folder.
17:53 JoeJulian jonfatino: My opinion is that replication is only for fault tolerance.
17:54 JoeJulian ~pasteinfo | DavidVargese
17:54 glusterbot DavidVargese: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
17:54 jonfatino so two nodes with replica2 should work fine as the masters and then geo-replicate the rest? Now when nginx accesses the gluster client is that going to look at local server or is it going to query all the servers in the pool to see who has the latest file?
17:55 jonfatino A buddy of mine said I could just read directly from the bricks and not even use the gluster client but I don't know about that
17:55 jonfatino It's not writing so I'm not sure
17:56 JoeJulian jonfatino: reading is safe, writing must be through clients. Geo-rep is unidirectional so if you can do a top-down delivery structure, you can do that.
17:56 JoeJulian Personally, for cdn I prefer swift.
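For reference, a master-to-slave geo-replication session of the kind mentioned above is typically created roughly like this (hedged sketch: mastervol, slavevol and slavehost are hypothetical names, and passwordless root ssh from a master node to the slave is assumed):

    # on a master node: generate/distribute the pem keys, then create and start the session
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status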
17:56 ivan_rossi left #gluster
17:57 jonfatino JoeJulian: do you have a good guide to set up swift for that? I assume it would be object storage?
17:57 JoeJulian Yes, swift is object storage. Functionally equivalent to S3.
17:59 arielb joined #gluster
18:00 neofob joined #gluster
18:04 DavidVargese JoeJulian, http://fpaste.org/283813/
18:04 glusterbot Title: #283813 Fedora Project Pastebin (at fpaste.org)
18:07 pdrakewe_ joined #gluster
18:08 rafi joined #gluster
18:10 JoeJulian DavidVargese: Ok. Then on one client create a file, "echo fubar > /storage-pool/foo". On the "broken" client, "cat /storage-pool/foo" and make sure it says "fubar".
18:11 DavidVargese ok, hold on
18:12 jiffin joined #gluster
18:13 JoeJulian Assuming that works on any client but the one you think is a problem, "sed -i 's/r/z' /storage-pool/foo". Then on the problem client, cat /storage-pool/foo and it should now be "fubaz".
18:14 DavidVargese JoeJulian, http://fpaste.org/283820/44588322/ its updating right now.
18:14 glusterbot Title: #283820 Fedora Project Pastebin (at fpaste.org)
18:15 DavidVargese sed: -e expression #1, char 5: unterminated `s' command
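The sed error above is just the missing closing delimiter in the substitution; the intended command would be:

    sed -i 's/r/z/' /storage-pool/foo    # rewrites "fubar" to "fubaz" on the shared volume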
18:17 ir8 joined #gluster
18:17 DavidVargese i just edited the file manually with nano, and it is also updating properly
18:18 DavidVargese but somehow earlier it was not updating, and i needed to reboot to get it updated.
18:18 DavidVargese which i think an unmount and remount would also have done.
18:19 DavidVargese is it possible that many users accessing the site may cause it?
18:21 DavidVargese right now it is 2:21AM and there are not many users. it happens in the evening.
18:21 mhulsman joined #gluster
18:23 thoht__ JoeJulian: do you use ZFS on Linux or *BSD ?
18:27 thoht__ when we take a node down for, let's say, 2 minutes (a reboot), is it normal to see a self heal when it comes back ?
18:29 cliluw I'm curious - what does AFR stand for?
18:30 jiffin cliluw: automatic file replication
18:31 cliluw jiffin: Ok. I was searching through the wiki and could only find the abbreviation - never the expansion.
18:32 JoeJulian DavidVargese: How are you determining that it's not updating?
18:32 JoeJulian thoht__: I just finished getting everything off of zfs on linux.
18:33 JoeJulian thoht__: yes, it's normal to have a self heal after a server has been down or unavailable.
18:34 jiffin cliluw: np
18:35 DavidVargese JoeJulian: earlier we put the site into maintenance mode by editing a config file to forward to an "under maintenance" page. however some of the clients still stayed online and not in maintenance mode until i rebooted the client.
18:36 agliodbs joined #gluster
18:37 agliodbs I am on a quest to find an HA FS which has acceptable performance on synchronous file appends.  Specifically on a database transaction log.  Can someone here make a case for gluster, particularly with performance figures?
18:37 JoeJulian The client will always stay online. Are you meaning to tell me that *your software* isn't re-reading the file?
18:37 DavidVargese later on we updated the maintenance page to display the estimated time when the site will be back online, and the same thing happened: the cloned client did not get the update and did not display the estimated time until we rebooted.
18:38 DavidVargese JoeJulian, yes
18:38 JoeJulian Sounds like a software problem.
18:38 JoeJulian php?
18:38 DavidVargese yup
18:38 DavidVargese apache and php
18:39 DavidVargese JoeJulian, i did restart apache before rebooting. it still was not reading the updated file until i rebooted.
18:39 JoeJulian Are you using apc?
18:39 DavidVargese apc?
18:39 JoeJulian Check the file from bash using cat. I'm sure it's changing.
18:40 DavidVargese i did not try that just now.
18:40 DavidVargese is there known problem with apache and php with gluster?
18:43 JoeJulian Absolutely not.
18:43 JoeJulian But there are potential configuration problems that you could have with your apache/php implementation that could cause what you're seeing.
18:45 DavidVargese im using ubuntu and installed apache and php via apt-get, and updated everything prior to installing. the weird thing is that the original does not have this problem, only the cloned client.
18:48 JoeJulian I think we've successfully shown that this isn't a gluster problem. I can't help you with apache or php as I don't use apache and I haven't used php in a couple years.
18:49 DavidVargese JoeJulian, yeah i think so too. thanks for the help. i really appreciate it.
18:49 JoeJulian You're welcome.
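For reference, the usual suspect for "files change on the mount but the web server keeps serving old content" is stat/opcode caching on the PHP side (APC/OPcache) rather than gluster; a hedged sketch of things to check, reusing the volume and mountpoint named earlier in the log:

    # see whether an opcode cache is configured to skip re-stat'ing files
    php -i | grep -iE 'apc\.stat|opcache\.validate_timestamps|opcache\.revalidate_freq'
    # while testing, the fuse mount can also be made less aggressive about caching attributes
    mount -t glusterfs -o attribute-timeout=0,entry-timeout=0 web1:/volume_1 /storage-pool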
18:56 thoht__ JoeJulian: when did you leave zfs ?
18:56 JoeJulian Just finished today.
18:58 thoht__ why did you leave zfs ?
18:58 thoht__ not satisfied ? too much memory used ?
18:59 JoeJulian Actually cpu was the biggest problem.
19:01 agliodbs ... anyone?
19:01 thoht__ JoeJulian: my reboot was 1 hour ago and i still see undergoing heal.. is that normal ?
19:01 ildefonso joined #gluster
19:02 JoeJulian I was trying to use compression and dedup, neither of which actually ended up being of any benefit. Since I wasn't getting anything useful out of it, and it was extremely resource hogging, I just scrapped it and went back to xfs.
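For reference, the usual way to prepare an xfs brick for gluster is with a larger inode size so the extended attributes fit inline; a sketch, with the device and paths as hypothetical examples:

    mkfs.xfs -i size=512 /dev/vg_bricks/brick1   # 512-byte inodes leave room for gluster's xattrs
    mkdir -p /bricks/brick1
    mount /dev/vg_bricks/brick1 /bricks/brick1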
19:02 akik what could be the reason for vim saying something like "file has changed since opening it" ? the file is on a glusterfs mount, small config file
19:03 thoht__ JoeJulian: not good dedup and VM backup ?
19:03 JoeJulian agliodbs: No clue. Sounds like it might be a good topic for a talk at a convention though.
19:04 JoeJulian akik: Did the file change since opening it? ;)
19:04 akik JoeJulian: no, the message comes up right after opening it
19:04 agliodbs well, the best that CephFS offers me is like 2ms per write, which is kinda slow when you're talking about writing a couple hundred bytes.
19:04 agliodbs I was hoping that gluster could do better ...
19:05 JoeJulian 2ms does seem a bit long.
19:05 ramky joined #gluster
19:05 JoeJulian We're users here. For sales you'd want to talk to Red Hat Storage.
19:06 agliodbs I was hoping to find a contributor, or a high-end user.  Sales will tell me what they think I want to hear, regardless of reality.
19:06 agliodbs this is for OSS stuff anyway
19:06 JoeJulian Actually, they'd probably just say they don't support that.
19:07 JoeJulian It would probably be faster than ceph. There's going to be less overhead. If you keep the FD open, it should be the slower of RTT or disk response.
19:08 agliodbs yeah, I know it'll be slower than local or direct attach disk, of course.  The critical question is how much slower.
19:08 agliodbs RTT?
19:08 JoeJulian round trip time.
19:08 JoeJulian network
19:08 agliodbs ah, yea
19:09 agliodbs JoeJulian: part of the slowness with Ceph is that it's basically an object store, which means it doesn't really do appends.  If Gluster has a way to do real file appends, that will give it a leg up ...
19:09 JoeJulian Oh, going through fuse there will be a couple of context switches each way.
19:10 JoeJulian If you could write your engine to use libgfapi it would avoid those.
19:10 JoeJulian Yep, it's posix.
19:11 sadbox joined #gluster
19:12 thoht__ JoeJulian: so "undergoing heal" is normal behavior after a reboot; but what is it doing exactly? and why does it take so long ?
19:12 thoht__ some checksum ?
19:22 agliodbs JoeJulian: possible ...
19:25 brian joined #gluster
19:26 jwd joined #gluster
19:26 Guest25613 I have gluster working on an instance, but when I try to run a docker container that mounts a volume to the folder that's mounted to gluster, nothing appears in the docker container
19:26 JoeJulian thoht__: "gluster volume heal $vol statistics" might be of interest?
19:26 Guest25613 any ideas?
19:28 thoht__ JoeJulian: https://paste.ee/p/iEENm
19:28 glusterbot Title: Paste.ee - View paste iEENm (at paste.ee)
19:28 JoeJulian btdiehr: I think there's something about lxc and fuse...
19:28 thoht__ No. of heal failed entries: 1
19:28 thoht__ is that bad ?
19:28 JoeJulian thoht__: maybe, usually not. Check the log file (glustershd.log)
19:29 thoht__ failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running
19:29 thoht__ this is the last error
19:30 ira joined #gluster
19:30 thoht__ and the last entry is  Completed data selfheal on d501d4ae-46b1-4c74-b9ca-1f8882f7832c. source=1 sinks=0
19:31 JoeJulian afk for a bit
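For reference, the heal state discussed above can also be inspected with the heal sub-commands and the self-heal daemon log; a sketch, assuming a volume named myvol:

    gluster volume heal myvol info                    # entries still pending heal
    gluster volume heal myvol info split-brain        # entries stuck in split-brain, if any
    gluster volume heal myvol statistics heal-count   # pending-heal counts per brick
    tail -f /var/log/glusterfs/glustershd.log         # the self-heal daemon log mentioned above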
19:31 abyss joined #gluster
19:43 calavera joined #gluster
19:46 mhulsman joined #gluster
20:00 deniszh joined #gluster
20:02 skylar1 joined #gluster
20:04 cristian joined #gluster
20:11 DV joined #gluster
20:16 F2Knight joined #gluster
20:17 dlambrig joined #gluster
20:17 dlambrig left #gluster
20:17 jwaibel joined #gluster
20:30 mhulsman joined #gluster
20:40 vincent_1dk joined #gluster
20:43 dlambrig joined #gluster
20:44 dlambrig left #gluster
20:45 RedW joined #gluster
20:46 bivak joined #gluster
20:50 vincent_1dk joined #gluster
20:56 deniszh joined #gluster
21:00 ilbot3 joined #gluster
21:00 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
21:07 skylar joined #gluster
21:08 dblack joined #gluster
21:09 agliodbs joined #gluster
21:18 arielb joined #gluster
21:20 side_control joined #gluster
21:22 julim joined #gluster
21:36 calavera joined #gluster
21:38 stickyboy joined #gluster
21:39 jobewan joined #gluster
21:58 side_con1rol joined #gluster
22:02 side_control joined #gluster
22:07 F2Knight joined #gluster
22:36 Rapture joined #gluster
22:47 shyam joined #gluster
23:03 agliodbs joined #gluster
23:09 arielb2 joined #gluster
23:11 Rapture joined #gluster
23:22 Telsin joined #gluster
23:26 sadbox joined #gluster
23:32 arielb2 joined #gluster
23:37 gildub joined #gluster
23:56 calavera joined #gluster
