
IRC log for #gluster, 2016-03-21


All times are shown in UTC.

Time Nick Message
00:00 harish joined #gluster
00:00 plarsen joined #gluster
00:01 haomaiwa_ joined #gluster
00:14 luizcpg___ joined #gluster
00:34 RameshN joined #gluster
00:43 plarsen joined #gluster
01:00 harish joined #gluster
01:01 haomaiwa_ joined #gluster
01:19 EinstCrazy joined #gluster
01:48 EinstCra_ joined #gluster
01:49 hagarth joined #gluster
02:00 EinstCrazy joined #gluster
02:03 harish joined #gluster
02:58 sakshi joined #gluster
03:03 Lee1092 joined #gluster
03:16 Chinorro joined #gluster
03:17 ahino1 joined #gluster
03:22 DV joined #gluster
03:30 Chinorro joined #gluster
03:35 gem joined #gluster
03:37 DV joined #gluster
03:40 glisignoli joined #gluster
03:43 cliluw joined #gluster
03:44 jiffin joined #gluster
03:44 shubhendu joined #gluster
03:45 tswartz joined #gluster
03:47 atinm joined #gluster
03:50 Chinorro joined #gluster
03:51 geniusoftime joined #gluster
03:51 geniusoftime is it a bad idea to create a gluster volume on mount points and not a sub directory
03:51 geniusoftime i get a warning when trying to create a volume
03:51 geniusoftime "Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior."
03:53 geniusoftime actually just realised why this is a bad idea
03:53 geniusoftime if the mount disappears...
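
(A minimal sketch of what the warning at 03:51 asks for, assuming a hypothetical server host1, a filesystem mounted at /data/brick1 and an illustrative volume name gv0: using a subdirectory as the brick means that if the mount ever disappears the brick path is simply missing and the brick fails to start, instead of gluster silently writing into the empty mountpoint directory on the root disk.)

    mkdir -p /data/brick1/gv0
    gluster volume create gv0 host1:/data/brick1/gv0
    gluster volume start gv0
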
03:58 itisravi joined #gluster
03:59 Chinorro joined #gluster
04:02 tdasilva joined #gluster
04:06 overclk joined #gluster
04:13 kdhananjay joined #gluster
04:14 shubhendu joined #gluster
04:25 kshlm joined #gluster
04:25 RameshN joined #gluster
04:27 troj joined #gluster
04:28 gem joined #gluster
04:30 kkeithley1 joined #gluster
04:30 nehar joined #gluster
04:40 kanagaraj joined #gluster
04:45 ovaistariq joined #gluster
04:45 vmallika joined #gluster
04:46 ndarshan joined #gluster
04:52 ashiq_ joined #gluster
04:54 gowtham joined #gluster
04:57 EinstCra_ joined #gluster
04:57 ovaistar_ joined #gluster
04:58 AceBlade258 joined #gluster
04:59 ppai joined #gluster
05:01 kovshenin joined #gluster
05:02 nishanth joined #gluster
05:07 dlambrig_ joined #gluster
05:08 nehar joined #gluster
05:10 _fortis joined #gluster
05:11 troj joined #gluster
05:14 hagarth joined #gluster
05:14 tswartz joined #gluster
05:14 prasanth joined #gluster
05:15 valkyr1e joined #gluster
05:19 hchiramm joined #gluster
05:19 hgowtham joined #gluster
05:22 troj joined #gluster
05:23 Apeksha joined #gluster
05:25 ovaistariq joined #gluster
05:30 anil_ joined #gluster
05:32 Manikandan joined #gluster
05:32 karthik___ joined #gluster
05:32 Manikandan joined #gluster
05:34 troj joined #gluster
05:43 rafi joined #gluster
05:43 rastar joined #gluster
05:44 karnan joined #gluster
05:45 haomaiwa_ joined #gluster
05:48 plarsen joined #gluster
05:49 nbalacha joined #gluster
05:52 haomaiwa_ joined #gluster
05:56 troj joined #gluster
05:57 anmol joined #gluster
05:59 Saravanakmr joined #gluster
06:01 AceBlade2581 joined #gluster
06:01 poornimag joined #gluster
06:01 haomaiwa_ joined #gluster
06:08 sakshi joined #gluster
06:09 pur joined #gluster
06:14 ramky joined #gluster
06:16 atalur joined #gluster
06:17 spalai joined #gluster
06:21 Arrfab joined #gluster
06:33 skoduri joined #gluster
06:42 gbox joined #gluster
06:45 atinm joined #gluster
06:45 nbalacha joined #gluster
06:51 jwd joined #gluster
06:52 liibert joined #gluster
07:01 haomaiwa_ joined #gluster
07:03 troj joined #gluster
07:09 jtux joined #gluster
07:09 ahino joined #gluster
07:15 R0ok_ joined #gluster
07:15 mhulsman joined #gluster
07:22 troj joined #gluster
07:26 robb_nl joined #gluster
07:27 gbox glusterd & glusterfsd are running but the filesystem has become completely unresponsive.  I was performing a full scan (find | stat) to check for problems.  Has anyone encountered this situation?
07:32 nbalacha joined #gluster
07:37 mbukatov joined #gluster
07:39 gbox forced restart of glusterfsd seemed to work, although it resulted in "forced unwinding frame"
07:40 sakshi joined #gluster
07:41 arcolife joined #gluster
07:41 fsimonce joined #gluster
07:42 jtux joined #gluster
07:42 atinm joined #gluster
08:01 haomaiwang joined #gluster
08:01 jiffin joined #gluster
08:06 kshlm joined #gluster
08:09 baojg joined #gluster
08:10 jri joined #gluster
08:12 harish joined #gluster
08:13 [Enrico] joined #gluster
08:13 sakshi joined #gluster
08:21 Wizek joined #gluster
08:26 XpineX joined #gluster
08:27 deniszh joined #gluster
08:30 ivan_rossi joined #gluster
08:32 kdhananjay joined #gluster
08:37 ctria joined #gluster
08:49 Gaurav_ joined #gluster
08:58 spalai joined #gluster
09:01 haomaiwa_ joined #gluster
09:07 JesperA joined #gluster
09:11 ovaistar_ joined #gluster
09:13 Jules- joined #gluster
09:14 ovaistariq joined #gluster
09:14 RameshN joined #gluster
09:21 gem joined #gluster
09:21 Manikandan joined #gluster
09:26 luizcpg joined #gluster
09:33 kdhananjay joined #gluster
09:37 Slashman joined #gluster
09:37 itisravi joined #gluster
09:46 Manikandan joined #gluster
09:47 nbalacha joined #gluster
09:50 Manikandan_ joined #gluster
09:53 gem_ joined #gluster
09:53 sabansal_ joined #gluster
09:54 kore joined #gluster
10:01 7GHAAKSGA joined #gluster
10:07 luizcpg joined #gluster
10:16 ashiq_ joined #gluster
10:24 haomaiwang joined #gluster
10:25 Manikandan joined #gluster
10:27 gem_ joined #gluster
10:32 aravindavk joined #gluster
10:35 rastar joined #gluster
10:44 jwaibel joined #gluster
10:52 nbalacha joined #gluster
10:55 rastar joined #gluster
10:59 ashiq_ joined #gluster
11:01 haomaiwa_ joined #gluster
11:02 alghost joined #gluster
11:04 Manikandan joined #gluster
11:04 kore why am i getting 500KB/s write speed to a 2 node replica volume after I hit performance.cache-size? nodes connected with 1G
11:09 matclayton joined #gluster
11:10 matclayton Is it safe to run 3.6.9 clients against 3.6.3 servers?
11:13 skoduri joined #gluster
11:22 JesperA joined #gluster
11:22 deniszh1 joined #gluster
11:24 deniszh joined #gluster
11:32 ira joined #gluster
11:33 farblue joined #gluster
11:36 ppai joined #gluster
11:46 hackman joined #gluster
11:47 shubhendu joined #gluster
11:48 nishanth joined #gluster
11:51 Norky joined #gluster
11:53 johnmilton joined #gluster
11:55 farblue Hi all :) I’m experimenting with GlusterFS and am overall very happy but I have one application that has a few very large data files (3 or 4 Gb) which is pretty slow. Are there some simple steps I can take to improve performance with these very large files without impacting general performance? These files are only read/modified from one app (cross-server sync and nfs are not an issue).
11:59 kdhananjay joined #gluster
12:00 spalai left #gluster
12:00 prasanth joined #gluster
12:01 haomaiwa_ joined #gluster
12:11 gbox joined #gluster
12:19 farblue_ joined #gluster
12:21 post-factum farblue: what do you mean "pretty slow"?
12:22 unclemarc joined #gluster
12:22 farblue Well, the application in question is Atlassian Crucible and if I access the commit history for one of the repos it is tracking that has a 4Gb data.bin file it can take 45 seconds to come back with info while doing the same on a repo with a 200Mb data.bin file takes about 2 seconds to return
12:22 bluenemo joined #gluster
12:23 farblue I wondered whether the larger files were not being cached or something
12:23 21WAAHTBO joined #gluster
12:23 farblue I’m going to assume Crucible is doing random access on the 4Gb data file
12:23 post-factum farblue: is sharding enabled for gluster volume?
12:24 farblue The volume is a dispersed volume over 5 servers with a redundancy of 2
12:24 post-factum farblue: haven't tried dispersed volumes, sorry :(
12:24 farblue it’s not a massive shared filesystem - I’m using it for protection against hardware failure rather than for massive storage etc.
12:25 farblue post-factum: are there any general tips you could suggest I investigate?
12:25 post-factum farblue: if you have enough space, you may try separate distributed-replicated volume with sharding enabled
12:25 farblue I’m pretty new to Gluster so what I have is basically an ‘out the box’ config
12:26 farblue distributed-replicated would be the equiv. of a RAID 10 setup I assume?
12:26 post-factum correct
12:27 farblue I’ve not got enough space for that I’m afraid :(
12:27 farblue do you know if there’s a default max file size for cached files?
12:27 post-factum you may create new bricks on existing partitions under different folders
12:27 post-factum max file size for cache is set via volume option
12:29 farblue do you know where the file cache actually is?
12:29 farblue is it cached in ram? could I point it at an SSD?
12:30 EinstCrazy joined #gluster
12:30 farblue and I don’t understand the difference between the ‘performance.cache-max-file-size’ value and the ‘performance.cache-size’ value
12:31 shubhendu joined #gluster
12:31 nbalacha joined #gluster
12:32 nishanth joined #gluster
12:35 Manikandan joined #gluster
12:37 post-factum farblue: will try to google my ML thread about caching now
12:37 farblue I’m googling but not getting very far - just lots of sites cloning the ‘tuning options’ table
12:38 kanagaraj joined #gluster
12:38 post-factum farblue: https://www.gluster.org/pipermail/gluster-devel/2015-September/046611.html
12:38 glusterbot Title: [Gluster-devel] GlusterFS cache architecture (at www.gluster.org)
12:38 farblue cool
12:38 post-factum farblue: I asked the same questions and got that answer. I believe it is still relevant
12:38 farblue I’ll read it now :)
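
(A hedged sketch of the two options being discussed, assuming an illustrative volume name myvol and illustrative values: performance.cache-size is the total memory budget of the client-side io-cache translator, while performance.cache-max-file-size only decides which files are eligible for that cache at all.)

    gluster volume set myvol performance.cache-size 512MB
    gluster volume set myvol performance.cache-max-file-size 128MB
    gluster volume info myvol    # lists the options that have been reconfigured
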
12:43 nbalacha joined #gluster
12:45 post-factum farblue: as for SSD, you may use volume tiering for speeding up volume with SSD
12:45 post-factum farblue: don't know, however, if that works with sharded volumes. I strongly advise you to enable sharding for volumes with large files in a distributed scheme
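
(A hedged sketch of that sharding advice, assuming an illustrative volume name myvol and an illustrative block size: sharding splits large files into fixed-size pieces so writes and heals touch only the affected shards rather than the whole multi-GB file; it should be enabled before the large files are written onto the volume.)

    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB
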
12:46 plarsen joined #gluster
12:52 nehar joined #gluster
12:58 aravindavk joined #gluster
13:01 haomaiwa_ joined #gluster
13:01 anmol joined #gluster
13:10 bennyturns joined #gluster
13:12 archit_ joined #gluster
13:26 overclk joined #gluster
13:26 matclayton joined #gluster
13:31 coredump joined #gluster
13:31 dgandhi joined #gluster
13:32 dgandhi joined #gluster
13:33 dgandhi joined #gluster
13:37 mhulsman1 joined #gluster
13:42 AceBlade258 joined #gluster
13:43 robb_nl joined #gluster
13:43 ggarg joined #gluster
13:45 mpietersen joined #gluster
13:45 hamiller joined #gluster
13:55 DV joined #gluster
13:58 DV__ joined #gluster
14:03 frakt joined #gluster
14:03 shyam joined #gluster
14:03 haomaiwang joined #gluster
14:16 skylar joined #gluster
14:17 jwrona joined #gluster
14:18 kdhananjay joined #gluster
14:19 plarsen joined #gluster
14:23 hagarth joined #gluster
14:27 jwrona Hello! I've been wondering for a while, is there a way I can make a distributed replicated volume where replicas are created in a cyclic manner? Eg. 3 node cluster, NodeA places its replicas on NodeB, NodeB on NodeC and NodeC on NodeA. Something similar to HP Vertica's K-Safety https://my.vertica.com/docs/7.1.x/HTML/index.htm#Authoring/ConceptsGuide/Components/K-Safety.htm
14:29 coredump joined #gluster
14:33 post-factum gluster volume create foobar replica 2 node1:/brick1 node2:/brick1 node2:/brick2 node3:/brick2 node3:/brick3 node1:/brick3
14:33 dgandhi joined #gluster
14:33 post-factum if i got your idea correctly
14:35 haomaiwa_ joined #gluster
14:35 dgandhi joined #gluster
14:37 Saravanakmr joined #gluster
14:37 jwrona post-factum: YES! This looks like it, I'll give it a try. Thanks.
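
(For reference, how post-factum's command produces the cyclic layout: with replica 2, each consecutive pair of bricks forms one replica set, so the sets are node1/node2, node2/node3 and node3/node1, and every node holds a copy of its neighbour's data.)

    gluster volume create foobar replica 2 \
        node1:/brick1 node2:/brick1 \
        node2:/brick2 node3:/brick2 \
        node3:/brick3 node1:/brick3
    gluster volume info foobar    # shows the three replica-2 subvolumes
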
14:39 plarsen joined #gluster
14:40 dgandhi joined #gluster
14:42 johnmilton joined #gluster
14:45 sakshi joined #gluster
14:48 gem_ joined #gluster
14:49 rafi1 joined #gluster
14:49 Apeksha joined #gluster
14:50 wushudoin joined #gluster
14:51 Apeksha joined #gluster
14:52 Apeksha joined #gluster
14:57 AceBlade258 joined #gluster
15:00 spalai joined #gluster
15:00 Apeksha_ joined #gluster
15:01 haomaiwa_ joined #gluster
15:04 post-factum jwrona: np
15:05 anmol joined #gluster
15:05 farblue post-factum: I can’t find anything that’s going to help me with the performance on these very large files :(
15:05 johnmilton joined #gluster
15:06 farblue Looking at the top write file info, it looks like Atlassian Crucible is hammering these large files for writes as well as reads
15:07 farblue Count    Filename
15:07 farblue =======================
15:07 farblue 2645585  /Service/crucible/crucible/crucible-data/var/cache/Website2/revcache/data.bin
15:07 matclayton left #gluster
15:07 farblue 1492367  /Service/crucible/crucible/crucible-data/var/cache/ControlPanel3/revcache/data.bin
15:07 farblue 1108284  /Service/crucible/crucible/crucible-data/var/cache/SharedLibrary/revcache/data.bin
15:07 post-factum @paste
15:07 glusterbot post-factum: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
15:07 farblue each of those three files is over 2Gb
15:07 post-factum i expect dispersed volume might be slow on writes
15:07 farblue very likely
15:08 post-factum farblue: so you would want to try distributed-replicated with sharding
15:09 farblue out of 5 servers I want support for 2 being out of the cluster at any one time
15:09 farblue I don’t think I can do that with distributed-replicated
15:12 farblue I guess I could try a standard RAID5 style setup and see if the performance is better than dispersed. Problem then is the time it will take to rebuild the lost bricks when they are added back to the pool
15:13 farblue annoyingly, it works pretty well for all the other data stored on the volume
15:16 farblue so if I specified a replica count of 3 on a distributed replicated setup then the first 3 bricks specified would become replica 1 and so on in sets of three?
15:17 atinm joined #gluster
15:18 post-factum first 3 is replica 3
15:20 farblue I can’t work out a layout of bricks that will give me a redundancy of 2 across 5 servers
15:20 gbox joined #gluster
15:21 farblue if I partition disks into multiple bricks then I still need to offset the use of the bricks so I can still drop 2 disks
15:22 farblue I think I might need to change tactic - stick with the dispersed setup I have but move these large files onto local SSD and then regularly rsync them for backup or something
15:22 ggarg joined #gluster
15:23 gem_ joined #gluster
15:24 spalai left #gluster
15:24 post-factum farblue: try to study about tiering in gluster, that may help
15:26 EinstCrazy joined #gluster
15:27 farblue hmm, interesting
15:29 farblue although from the wording the ‘cold’ tier doesn’t keep a copy of the data in the ‘hot tier’ so I’d still need to protect data in the ‘hot tier’ against data loss
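
(A hedged sketch of the tiering idea, assuming an illustrative volume name myvol and hypothetical SSD hosts ssd1/ssd2; the hot tier can itself be created with a replica count, which addresses the data-protection concern above. The syntax shown is the 3.7-era "gluster volume tier" command and may differ between versions, so check "gluster volume help".)

    gluster volume tier myvol attach replica 2 ssd1:/bricks/hot/myvol ssd2:/bricks/hot/myvol
    gluster volume tier myvol status
    gluster volume tier myvol detach start     # later: flush hot data back to the cold tier
    gluster volume tier myvol detach commit
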
15:34 haomaiwang joined #gluster
15:36 anmol joined #gluster
15:39 farblue Is it possible to create a 3-brick mirror volume?
15:39 post-factum farblue: with replica 3, yes
15:40 farblue hmm, although, if this software is constantly writing to a 4Gb file then mirroring is still going to be slow
15:42 farblue I think I’ll have a chat with Atlassian because these files are named on disk as though they are cache files. I’d like to know what would happen if they were lost. Maybe the solution is to accept the loss of these files in case of machine fail and rebuild them
15:44 farblue then I can drop them on a local SSD
15:47 dlambrig_ joined #gluster
15:48 liibert joined #gluster
15:53 atinm joined #gluster
15:59 jiffin joined #gluster
16:01 haomaiwa_ joined #gluster
16:09 kvigor joined #gluster
16:12 robb_nl joined #gluster
16:15 shakor joined #gluster
16:16 shakor Hello, small question: I have 2 gluster servers up and running. With volume data. This data volume is being mounted by a "client". What happens to the mount if lets say server 1 goes down?
16:16 plarsen joined #gluster
16:16 shakor Do I need to edit the mount to mount server 2?
16:16 farblue how is the volume data distributed between the servers?
16:16 shakor mirror
16:16 farblue if you are using the fuse client I believe it automatically fails over
16:17 shakor What do you mean with the fuse client?
16:18 shakor I currently just use fstab with server1:/volume glusterfs
16:18 farblue that’s the fuse client
16:18 ctria joined #gluster
16:18 shakor farblue: hehe - thanks.
16:18 shakor So it should failover automatically?
16:19 shakor That sounds almost too good to be true :)
16:19 farblue I’m pretty new to this but yes, my understanding is that the fuse client connects to all the servers in the cluster directly and will notice when one dies and switch over
16:20 shakor farblue: thanks for the info. I am very pleased to hear that.
16:21 kvigor left #gluster
16:21 farblue I’d double-check it but that’s certainly my understanding :)
16:21 shakor I am going to try to reproduce this with 2 test servers, see how stable it is.
16:22 farblue http://unix.stackexchange.com/questions/213705/glusterfs-how-to-failover-smartly-if-a-mounted-server-is-failed
16:22 glusterbot Title: mount - GlusterFS how to failover (smartly) if a mounted Server is failed? - Unix & Linux Stack Exchange (at unix.stackexchange.com)
16:23 jiffin joined #gluster
16:23 farblue make sure, of course, that the client can actually connect to all the cluster servers - you don’t want firewalls or network routing limits preventing the client talking to all the servers :)
16:26 shakor haha seems like a valid hint
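
(A hedged fstab sketch for the failover case being discussed, with hypothetical server names: the fuse client only uses the named server to fetch the volume description at mount time and then talks to every brick directly, so backup-volfile-servers merely provides fallbacks for that initial fetch.)

    server1:/volume  /mnt/volume  glusterfs  defaults,_netdev,backup-volfile-servers=server2  0 0
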
16:30 AceBlade258 joined #gluster
16:33 coredump joined #gluster
16:36 liibert joined #gluster
16:44 chirino_m joined #gluster
16:45 shubhendu joined #gluster
16:48 carnil joined #gluster
16:49 arcolife joined #gluster
16:49 carnil Hi. I haven't found a reason for this after googling for a while: I have a cluster migrated from 3.2.x to 3.5.x (Debian wheezy -> Debian jessie). After the update all files have link reference count 0, and trying to access them results in "No such file or directory"
16:49 carnil then the second attempt though is fine, can someone give some help with this?
16:49 emitor joined #gluster
16:50 calavera joined #gluster
16:51 spalai joined #gluster
16:51 gbox joined #gluster
16:52 lliibert joined #gluster
16:55 skoduri joined #gluster
16:58 jlp1 joined #gluster
16:58 spalai left #gluster
17:00 overclk joined #gluster
17:01 shaunm joined #gluster
17:02 calavera joined #gluster
17:04 jiffin joined #gluster
17:04 RameshN joined #gluster
17:05 Gaurav_ joined #gluster
17:08 ovaistariq joined #gluster
17:09 JoeJulian carnil: It's been so long since any of us have used 3.2, I can't remember what changes were done between those older versions that might cause that.
17:09 JoeJulian Sounds like something self-heal related though. Maybe gfid.
17:09 JoeJulian Oh, actually, definitely gfid. I remember now.
17:09 JoeJulian @lucky what's this new .glusterfs directory
17:09 glusterbot JoeJulian: https://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
17:10 JoeJulian I'm pretty sure it has to do with ^ that transition.
17:12 calavera joined #gluster
17:13 rafi joined #gluster
17:14 squizzi_ joined #gluster
17:14 calavera_ joined #gluster
17:17 tbm_ joined #gluster
17:18 emitor I'm having an issue with my gluster 3.7.6 on a replica 2 configuration with XFS file system. I have it mounted by NFS on a server that runs a rsync to make some backups using the 'cp -al' command and I'm getting the following error message "cp: cannot hard link 'FILENAME' to 'FILENAME_COPIED': No space left on device" but right now I have 989GB of
17:18 emitor free space on the gluster and I can do a cp of the file without any problem. Also I've tried to do a "cp -al" mounting gluster using glusterfs and I got the same error message. Can someone give me an idea to where the issue might be?
17:18 calavera joined #gluster
17:19 JoeJulian emitor: Could you be running out of inodes?
17:19 emitor I've tried the df -i command
17:19 emitor and I have 1% used
17:19 emitor might that be wrong?
17:20 JoeJulian You're checking that at the brick level?
17:20 JoeJulian My first guess would be that you have a brick that didn't mount and you've just filled up the root of your server.
17:21 JoeJulian Check each brick server and check the brick log. Check quotas.
17:21 JoeJulian I'd help you in more detail, but I have to run an errand into downtown Seattle.
17:23 emitor no problem JoeJulian
17:23 emitor I'll be here for a while
17:23 tdasilva joined #gluster
17:23 emitor i'll check that meanwhile
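
(A hedged checklist for the checks JoeJulian suggests, to be run on each brick server; the brick path /bricks/brick1, the volume name myvol and the log file name are illustrative of the usual conventions.)

    df -h /bricks/brick1               # free space on the brick filesystem itself
    df -i /bricks/brick1               # inode usage on the brick filesystem
    mount | grep /bricks/brick1        # confirm the brick is its own mount, not the root fs
    gluster volume quota myvol list    # any quota limits set on the volume
    less /var/log/glusterfs/bricks/bricks-brick1.log    # the brick log
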
17:24 Ulrar joined #gluster
17:28 jiffin1 joined #gluster
17:29 Manikandan joined #gluster
17:32 gem_ joined #gluster
17:36 Gnomethrower joined #gluster
17:36 jiffin1 joined #gluster
17:45 emitor JoeJulian, I did the df -i at the brick level and it gives me 1% used. The brick is correctly mounted. The quotas are not applied on the path where I'm working. I'm checking the brick's logs and I see this error message a lot: "failed to get inode ctx for". And I'm not using XFS on this server, I'm using ext4
17:48 jiffin joined #gluster
17:48 7GHAAKWDK joined #gluster
17:49 AceBlade258 left #gluster
17:49 gbox emitor:  If you google that you'll see some things related to quota but also to the temporary files rsync creates
17:50 gbox emitor: If you know how rsync works it does not seem like the ideal method for gluster.  Have you tried the rsync options that don't create temporary files?  --in-place or --whole-file?
17:50 gbox --inplace
17:54 emitor hi gbox, I didn't try an option that doesn't create temporary files. Why is that method not ideal for gluster?
17:56 gbox emitor: rsync uses a delta algorithm to transfer small parts of files instead of the whole file.  For gluster this would result in lots of changes (assuming you're using replication).  There is a discussion where this problem is acknowledged.  The user fixes it by having the temp files go into /tmp on the brick.
17:57 gbox --temp-dir
17:57 pur joined #gluster
17:59 gbox emitor: This discussion: http://www.gluster.org/pipermail/gluster-users/2015-June/022231.html
17:59 glusterbot Title: [Gluster-users] Double counting of quota (at www.gluster.org)
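
(A hedged sketch of the rsync variants mentioned above, with hypothetical paths: rsync's default behaviour writes a hidden temporary file and then renames it, and that temporary name can hash to a different brick than the final name and confuse quota accounting as in the thread above; these options avoid or relocate the temp file.)

    rsync -a --inplace    /source/dir/ /mnt/gluster/backups/     # update files in place, no temp copy
    rsync -a --whole-file /source/dir/ /mnt/gluster/backups/     # skip the delta algorithm entirely
    rsync -a --temp-dir=/tmp /source/dir/ /mnt/gluster/backups/  # keep temp files off the volume
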
18:01 jiffin joined #gluster
18:02 emitor thanks gbox, i'm going to test that. Anyway I don't think that would fix the "No space left on device"
18:04 jri joined #gluster
18:04 calavera joined #gluster
18:05 jiffin joined #gluster
18:08 arcolife joined #gluster
18:11 ivan_rossi left #gluster
18:13 calavera joined #gluster
18:27 jiffin joined #gluster
18:28 hchiramm joined #gluster
18:28 shlant joined #gluster
18:28 shlant left #gluster
18:35 carnil JoeJulian: thanks for the reference, will look into it now
18:37 Manikandan joined #gluster
18:43 carnil JoeJulian: so I guess I will need to touch every file once at least
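
(A hedged sketch of that crawl, run from a client mount at a hypothetical path: stat-ing every file forces a lookup on it, which is what lets gluster create the missing gfid links, the same find-plus-stat pattern gbox used earlier in this log.)

    find /mnt/glustervol -noleaf -print0 | xargs -0 stat > /dev/null
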
18:47 bennyturns joined #gluster
18:48 jiffin joined #gluster
18:51 emitor gbox, JoeJulian, I've disabled the quota and I don't have the error message any more. The quota was on a different path from where I was working; is this a known bug?
18:54 Slashman hello, just a quick question, if I'm in a configuration where I have 2 nodes with "replica 2" and one is on localhost, will my read only operations be done on the local node?
18:57 jiffin Slashman: use cluster.read-subvolume-index to specify the node
18:59 Slashman jiffin: okay found it, I guess I can also just use nfs and it will use the target host all the time, no?
18:59 liviudm left #gluster
19:00 jiffin Slashman: nfs client will target that host, distribution happens from the gluster nfs server
19:00 gbox emitor: Oh good.  You did have quotas.  IDK, I'm holding off on quotas.
19:01 Slashman jiffin: so if I'm using nfs, it may still read from the other node?
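
(A hedged sketch of jiffin's suggestion, assuming an illustrative volume name myvol mounted on a node that also holds a brick: cluster.read-subvolume-index pins reads to one brick of each replica set, indexed by brick order in the volume definition, so setting it to the index of the local brick keeps reads on localhost; -1 leaves the choice to gluster.)

    gluster volume set myvol cluster.read-subvolume-index 0
    gluster volume info myvol    # the brick order shows which index is the local brick
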
19:03 emitor gbox, but the quotas weren't on the path where I had the backups, so I don't know why this is causing me these issues...
19:03 jiffin joined #gluster
19:04 deniszh1 joined #gluster
19:05 emitor anyway thanks for your help gbox, now the backups are running at least. I'll try to find out why this is happening tomorrow.
19:11 Wizek joined #gluster
19:22 calavera joined #gluster
19:32 ahino joined #gluster
19:32 amye joined #gluster
19:41 Wizek joined #gluster
20:04 squizzi joined #gluster
20:12 DV joined #gluster
20:42 haomaiwang joined #gluster
21:29 robb_nl joined #gluster
21:31 F2Knight joined #gluster
21:51 calavera joined #gluster
21:58 calavera joined #gluster
22:07 DV joined #gluster
22:11 atrius joined #gluster
22:32 haomaiwa_ joined #gluster
22:35 hackman joined #gluster
23:05 deniszh1 left #gluster
23:08 deniszh joined #gluster
23:25 deniszh1 joined #gluster
23:26 cpetersen JoeJulian:  do you know of any good generic IT IRC channels to discuss things like IT standards and best practices?
23:26 om joined #gluster
23:36 JoeJulian Not really.
23:37 JoeJulian I thought we, here, were the ones that created those. ;)
23:46 shaunm joined #gluster
