
IRC log for #gluster, 2013-01-02


All times shown according to UTC.

Time Nick Message
00:08 nightwalk joined #gluster
00:24 cowmoose joined #gluster
00:27 eightyeight joined #gluster
00:32 cowmoose Hey, I'm new to gluster, I am wondering if it is possible to nest gluster volumes? I have a replicated volume that I would like to use in a distributed volume.
00:34 cowmoose would you have to mount the replicated volume to use as a brick for the distributed volume? It seems there should be another way
02:30 raven-np joined #gluster
02:49 bala1 joined #gluster
03:12 bharata joined #gluster
04:02 ajm joined #gluster
04:03 ajm anyone know what the .glusterfs/indices/xattrop directory is for? that dir on one of my gluster nodes has a lot of files in it, doesn't seem quite normal
04:04 cicero gluster uses xattrs to store metadata
04:04 cicero http://community.gluster.org/a/how-glusterfs-uses-extended-attributes-xattr/
04:04 glusterbot <http://goo.gl/DsYtP> (at community.gluster.org)
04:05 ajm right, these aren't xattrs on files, this is a specific directory called xattrop (with a very very large number of files in it)
04:05 cicero ah
04:05 cicero then i'm outta my league :P
04:05 ajm :)
04:06 ajm I would say how many files, but find was running for 36+ hours on the directory before I gave up
04:06 cicero eep
04:06 ajm the directory inode is >1gb
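A minimal sketch for sizing up a directory that big without waiting on find, assuming a hypothetical brick path of /data/brick1 (ls -f streams entries unsorted, so it returns far sooner than a sorted listing or a stat-heavy find):
    # count entries; -f disables sorting and implies -a, so '.' and '..' are included
    ls -f /data/brick1/.glusterfs/indices/xattrop | wc -l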
04:14 stopbit joined #gluster
04:15 mohankumar joined #gluster
04:22 overclk joined #gluster
04:34 shylesh joined #gluster
04:45 vpshastry joined #gluster
04:46 n8whnp joined #gluster
04:47 sripathi joined #gluster
04:47 hagarth joined #gluster
04:48 FyreFoX semiosis: are you around?
04:53 semiosis maybe
04:56 semiosis FyreFoX:
04:56 sgowda joined #gluster
04:56 hagarth :O
04:56 semiosis happy new year hagarth
04:57 semiosis and FyreFoX & all
04:57 hagarth thanks semiosis! wish you the same too :)
04:57 hagarth happy 2013 to everyone here
05:10 FyreFoX thanks semiosis !
05:10 FyreFoX quick on for you https://bugzilla.redhat.com/show_bug.cgi?id=887098
05:10 glusterbot <http://goo.gl/QjeMP> (at bugzilla.redhat.com)
05:10 glusterbot Bug 887098: urgent, high, ---, vshastry, POST , gluster mount crashes
05:10 FyreFoX any chance that patch can be cherry picked for the next 3.3.1 debs ?
05:10 FyreFoX its the tiniest patch ever ... :)
05:15 semiosis FyreFoX: i like to keep the ppa packages as close to the released source as possible (usually no changes at all)
05:15 semiosis FyreFoX: i've made exceptions so far only where things failed to build
05:15 semiosis but you could build the packages yourself with the patch, and I could answer questions if you have any about the build process
05:22 semiosis FyreFoX: since this looks like a bug in glusterfs (not specific to ubuntu environment) i'd like to see the glusterfs devs backport the patch into the 3.3 series for the 3.3.2 release
05:22 FyreFoX semiosis: ah ok. :( I think building local packages will get entirely too complicated
05:23 ajm its not, honestly
05:23 ajm ubuntu/debian makes it really simple
05:24 semiosis the process is basically this... 1. download ,,(latest) source tarball
05:24 glusterbot The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
05:25 semiosis 2. rename the downloaded tarball to glusterfs_3.3.1.orig.tar.gz, and untar it
05:27 semiosis 3. go to the ppa web site, https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.3/+packages and expand the tree for the version of ubuntu you're using, then download the .debian.tar.gz file from the list
05:27 glusterbot <http://goo.gl/3YP68> (at launchpad.net)
05:28 semiosis 4. untar that tarball inside the glusterfs-3.3.1 source tree you extracted in step 2, it will add a single directory "debian/" to the glusterfs source tree
05:28 semiosis 5. apply the patch to the glusterfs source tree
05:29 semiosis 6. run 'dch' to add a note to the debian/changelog file
05:30 Humble joined #gluster
05:30 semiosis 7. use 'debuild -S' (iirc) to prepare a source package to build
05:31 semiosis 8. those last two commands need to be run from the root of the glusterfs source, glusterfs-3.3.1/
05:32 semiosis 9. go up a folder, you'll see some new files created, one ending in .dsc, you can use 'pbuilder --build <the-dsc-file>' to build the binary packages
05:32 semiosis you'll need to run 'pbuilder --create' to set up pbuilder first before you can do the --build operation
05:32 semiosis that is roughly the process, iirc
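A condensed, untested sketch of those nine steps on Ubuntu, assuming devscripts and pbuilder are installed and the fix is saved as fix-887098.patch (the patch filename, paths and version strings are placeholders):
    # steps 1-2: give the release tarball the Debian orig name and unpack it
    mv glusterfs-3.3.1.tar.gz glusterfs_3.3.1.orig.tar.gz
    tar xzf glusterfs_3.3.1.orig.tar.gz
    # steps 3-4: unpack the PPA's .debian.tar.gz inside the source tree (adds debian/)
    cd glusterfs-3.3.1
    tar xzf /path/to/downloaded.debian.tar.gz
    # step 5: apply the fix
    patch -p1 < /path/to/fix-887098.patch
    # steps 6-8: record the change, then build a source package from the tree root
    dch -i "apply upstream patch for bug 887098"
    debuild -S -us -uc
    # step 9: build binaries from the generated .dsc one level up
    cd ..
    sudo pbuilder --create                      # once, to set up the build chroot
    sudo pbuilder --build glusterfs_3.3.1-*.dsc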
05:32 hagarth semiosis: I think it would be great to wikify this procedure
05:32 semiosis i'm about to sign off for the night
05:33 semiosis hagarth: yeah you're right!
05:33 FyreFoX semiosis: thanks I'll give it a burl :)
05:34 semiosis great, feel free to leave questions for me here & i'll try to follow up with you when i'm online tmrw
05:34 FyreFoX kk thanks
05:34 semiosis yw
05:39 semiosis hagarth: i put this up on a talk page for now (http://www.gluster.org/community/documentation/index.php/Talk:DebianPackagePatching) and i'll clean it up & move it to the 'page' side soon
05:39 glusterbot <http://goo.gl/x69jA> (at www.gluster.org)
05:40 semiosis FyreFoX: your feedback from following the process would be very helpful in cleaning up that page, so please let me know how it goes
05:40 semiosis thanks!
05:40 hagarth semiosis: cool, that looks neat
05:40 FyreFoX np will do
05:41 semiosis ok gotta run, good night all
05:42 FyreFoX good night semiosis
05:42 hagarth 'night semiosis
05:45 mohankumar joined #gluster
06:02 sripathi1 joined #gluster
06:08 hagarth joined #gluster
06:29 bala1 joined #gluster
06:32 vimal joined #gluster
06:35 Venkat1 joined #gluster
06:51 sripathi joined #gluster
06:55 rastar joined #gluster
07:19 hateya joined #gluster
07:20 jtux joined #gluster
07:23 shireesh joined #gluster
07:24 17SACAB1V joined #gluster
07:25 17SACAB1V left #gluster
07:32 hagarth joined #gluster
07:43 sunus joined #gluster
07:45 ngoswami joined #gluster
07:48 cowmoose joined #gluster
07:55 ekuric joined #gluster
08:07 passie joined #gluster
08:08 ctria joined #gluster
08:10 harshpb joined #gluster
08:17 cicero joined #gluster
08:24 puebele joined #gluster
08:36 ramkrsna joined #gluster
08:36 ramkrsna joined #gluster
08:45 xavih joined #gluster
09:15 manik joined #gluster
09:23 Lejmr joined #gluster
09:25 shireesh joined #gluster
09:32 Norky joined #gluster
09:38 ngoswami joined #gluster
09:40 eightyeight joined #gluster
09:44 DaveS joined #gluster
10:21 nullck joined #gluster
10:31 sripathi1 joined #gluster
10:36 vpshastry joined #gluster
10:37 sahina joined #gluster
11:09 16SAASWY1 joined #gluster
11:14 mohankumar joined #gluster
11:34 hateya joined #gluster
11:40 vpshastry joined #gluster
11:44 vimal joined #gluster
11:49 manik joined #gluster
11:50 duerF joined #gluster
11:56 rastar joined #gluster
12:10 ctria joined #gluster
12:14 hagarth joined #gluster
12:17 mohankumar joined #gluster
12:28 ddp23_ joined #gluster
12:29 ddp23_ hi, gluster newb: I have 6 servers / bricks. I'd like to stripe data across 3 and replicate to the other 3, but I'd like to be able to expand this setup in multiples of 2 additional servers. Is that possible and, if so, what does the volume create look like?! Thanks!
12:30 ddp23_ gluster version is 3.2.5-1ubuntu1 if that makes a difference...
12:44 ddp23_ ah, never mind, just read: http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ I didn't actually need striped after all :)
12:44 glusterbot <http://goo.gl/5ohqd> (at joejulian.name)
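For the record, the no-stripe answer to the question above is a plain distributed-replicated volume, which also grows two servers at a time; a hedged sketch with hypothetical hostnames and brick paths:
    # replica 2: bricks pair up in the order listed -> 3 replica pairs, distributed
    gluster volume create myvol replica 2 \
        srv1:/export/brick srv2:/export/brick \
        srv3:/export/brick srv4:/export/brick \
        srv5:/export/brick srv6:/export/brick
    gluster volume start myvol
    # later, expand in multiples of two servers (one new replica pair per add)
    gluster volume add-brick myvol srv7:/export/brick srv8:/export/brick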
12:45 ddp23_ left #gluster
13:16 veselin joined #gluster
13:18 veselin Hello all, would you expect a noticeable performance/other type of issue if using 2 bricks based off 2 iSCSI LUNs but off the same iSCSI san server?
13:21 veselin I need the shared filesystem and thought that RHEL Cluster+GFS2 would be too much for only 2 nodes.
13:24 Gugge why do you need a cluster filesystem if you still have the san as single point of failure?
13:25 Gugge But to answer your question, it should work fine :)
13:28 veselin Gugge: thank you, the filesystem is to be accessed from the 2 nodes at the same time
13:29 veselin Gugge: i.e load-balanced ftp servers
13:38 aliguori joined #gluster
13:42 Gugge veselin: NFS is fine for accessing something from multiple machines :)
13:45 rwheeler joined #gluster
13:45 Staples84 joined #gluster
13:49 veselin Gugge: true, however my thinking was that if you have single NFS server exporting the iSCSI LUN, if either the NFS host or the SAN goes down you lose access. Whereas if you had 2 gluster nodes, then its mostly the SAN that you should worry about.
13:51 Gugge Sure
13:55 hagarth joined #gluster
13:56 jtux joined #gluster
13:57 chirino joined #gluster
14:18 hagarth joined #gluster
14:23 passie joined #gluster
14:25 bennyturns joined #gluster
14:29 jtux joined #gluster
14:38 manik joined #gluster
14:40 plarsen joined #gluster
15:03 harshpb joined #gluster
15:11 raven-np joined #gluster
15:11 Alpinist joined #gluster
15:20 plarsen joined #gluster
15:21 wushudoin joined #gluster
15:24 wushudoin joined #gluster
15:38 raven-np joined #gluster
15:51 jbrooks joined #gluster
15:52 lh joined #gluster
15:52 lh joined #gluster
16:14 zaitcev joined #gluster
16:25 tc00per joined #gluster
16:32 glusterbot New news from newglusterbugs: [Bug 890509] kernel compile goes into infinite loop <http://goo.gl/ax3HE>
16:36 manik joined #gluster
16:45 cicero eep
16:50 Mo__ joined #gluster
16:51 hagarth joined #gluster
16:51 JoeJulian cicero: If that "eep" was about that new bug report, notice that it's tagged for mainline so that's not in production releases.
17:02 semiosis happy new year JoeJulian, cicero, all
17:05 johnmark semiosis: ^^^^^^ ditto
17:06 semiosis and to you as well johnmark
17:06 johnmark :)
17:07 copec Good morning...So I have Glusterfs 3.3.0 compiling on Solaris 11.1, with some minimal modifications, but the daemon won't start...http://pastebin.com/rYaBmeDJ
17:07 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
17:07 copec no matter if I set rdma,socket or tcp for the management volume, it seems this could be related to the problem:
17:07 copec [2013-01-02 09:53:23.885650] E [socket.c:333:__socket_server_bind] 0-tcp.management: binding to  failed: Address already in use
17:08 copec googling seems to bring up some people who have had the same issue
17:08 copec If anyone could point me in a direction, I would sure appreciate it
17:18 manik joined #gluster
17:31 cicero copec: i don't know if lsof is available on solaris, but can you see what's currently bound to that port?
17:32 cicero copec: lsof -i
17:34 JoeJulian semiosis: Hope yours has been happy so far as well. :D
17:35 cicero JoeJulian: yes it was an eep towards infinite loops. good to know it's not in production :)
17:35 cicero semiosis: you too!
17:40 copec cicero,  it is not, but there are other ways, and there is nothing bound to that port, or unix socket
17:40 cicero :(
17:41 copec It is peculiar that there is a space there instead of an ip or socket listed
17:41 cicero ah
17:41 cicero i thought you scrubbed that :P
17:41 copec heh
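Where lsof isn't available, Solaris can answer the same "what owns that port" question natively; a small sketch assuming the conflict is on glusterd's usual management port, 24007:
    # anything already listening on the management port?
    netstat -an -f inet | grep 24007
    # map a suspect process to its sockets (pfiles prints a 'port:' line per socket)
    pfiles <pid> | grep 'port: 24007'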
17:43 cicero this one thread about volume name failure http://comments.gmane.org/gmane.comp.file-systems.gluster.devel/2643 mentions trying to compile it in 32-bit instead
17:43 glusterbot <http://goo.gl/uAzKK> (at comments.gmane.org)
17:43 * cicero shrugs
17:44 copec I'
17:44 copec I'm on x86-64 Solaris 11.1
17:45 JoeJulian That space instead of hostname has gotten me before, too. I think I decided that it's irrelevant.
17:45 copec I'm not sure if it was a red herring or not
17:46 JoeJulian Gah! I hate that pastebin.com. :(
17:46 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
17:47 copec sorry
17:47 copec let me change it
17:48 copec http://dpaste.org/cy1iX/
17:48 glusterbot Title: dpaste.de: Snippet #215932 (at dpaste.org)
17:48 JoeJulian thanks
17:52 JoeJulian find -name socket.c
17:52 JoeJulian gah
17:59 nueces joined #gluster
18:00 hagarth joined #gluster
18:00 JoeJulian copec: Well I can't see anything obvious. Maybe try running an strace and see if anything jumps out at that line.
18:01 copec okay
18:10 kkeithley JoeJulian: are you running anything on 32-bit hardware?
18:10 kkeithley servers that is
18:11 kkeithley semiosis: how about you, any 32-bit servers?
18:11 kkeithley Or know of anyone using 32-bit servers?
18:11 * semiosis checks
18:12 semiosis kkeithley: yes i am currently, but i could replace these machines with 64-bit any minute
18:13 hateya joined #gluster
18:13 semiosis ec2 added support for 64 bit on all instance types a few months back
18:13 semiosis i just haven't replaced all my instances yet
18:13 semiosis looks like i have two still on 32 bit
18:14 semiosis kkeithley: why?
18:14 kkeithley indeed. Are the clients using native or NFS to mount?
18:14 semiosis native all the wy
18:14 semiosis way
18:14 semiosis oh and i should mention... 3.1.7 :)
18:15 kkeithley cool. I don't think 3.1.x matters for the question I'm answering. I don't want to tell a lie about 32-bit servers in production.
18:15 semiosis heh
18:16 semiosis yeah as the story goes, both JoeJulian & myself got into the packaging business because we needed i386 builds of glusterfs when none were available
18:16 semiosis so definitely 32-bit in production for both of us for a while
18:16 kkeithley excellent
18:17 semiosis oh wait... servers?  my servers are all 64 bit
18:17 semiosis i have 32 bit clients
18:17 kkeithley Did you have 32-bit servers at any point?
18:18 semiosis not me, but JoeJulian might have, idk
18:24 JoeJulian I did
18:25 JoeJulian I don't anymore though.
18:25 jbrooks joined #gluster
18:25 JoeJulian kkeithley: ^
18:26 kkeithley Thanks, just knowing you did means I won't be telling a lie when I say it's been done
18:27 kkeithley in production
18:27 JoeJulian I do everything in production. ;)
18:28 semiosis ,,(joes performance metric)
18:28 glusterbot I do not know about 'joes performance metric', but I do know about these similar topics: 'Joe's performance metric'
18:28 semiosis ,,(joe's performance metric)
18:28 glusterbot nobody complains.
18:34 harshpb joined #gluster
18:41 harshpb joined #gluster
18:44 harshpb joined #gluster
18:48 harshpb joined #gluster
18:49 harshpb joined #gluster
18:51 harshpb joined #gluster
18:54 harshpb joined #gluster
18:56 harshpb joined #gluster
18:57 sjoeboo joined #gluster
19:12 sjoeboo so, i've got some geo-replication questions....
19:12 sjoeboo first, ssh is required? you can't just do straight gluster -> gluster ?
19:16 harshpb joined #gluster
19:20 sjoeboo i'm basically looking for a good example of a glusterfs->glusterfs geo-rep setup...
19:24 harshpb joined #gluster
19:25 bugs_ joined #gluster
19:26 harshpb joined #gluster
19:28 harshpb joined #gluster
19:28 harshpb joined #gluster
19:29 harshpb joined #gluster
19:32 m0zes sjoeboo: you can do gluster -> gluster: gluster volume geo-replication <MASTERVOL> gluster://<slavehost>:<slavevol> start
19:32 m0zes iirc
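Spelled out with hypothetical names, and with the same iirc caveat as above (the slave volume has to exist and be started first):
    gluster volume geo-replication mastervol gluster://slavehost:slavevol start
    gluster volume geo-replication mastervol gluster://slavehost:slavevol status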
19:33 harshpb joined #gluster
19:38 sjoeboo how does one clear all geo-replication config? /var/lib/glusterd/geo-replication/gsyncd.conf ?
19:38 sjoeboo also, when setting up geo-rep to a gluster volume as a slave...can/should you use a RRdns name, or pick a specific slave node?
19:42 m0zes I just picked a slave node, it doesn't /really/ matter as gluster will talk to all the slaves via the normal gluster protocol in that instance. the slave node you picked would have to online to download the .vol file.
19:43 sjoeboo okay, cool, sounds reasonable then
19:43 sjoeboo any idea how to kill/unconfigure all geo replication ?
19:43 sjoeboo want a fresh start
19:45 wushudoin joined #gluster
19:45 m0zes sjoeboo: not sure. did you set any specific options? do you have geo-rep started between a master and slave?
19:46 sjoeboo i had attempted to set the gsyncd location, the default it was using wasn't right, but, i'm still seeing in the geo-rep logs its trying the old location
19:46 sjoeboo just want to stop it all together, it never started working
19:47 kkeithley gluster volume geo-rep $vol $slave stop
19:47 kkeithley that didn't work?
19:49 sjoeboo tried that, still seeing things running, at least from the geo-rep logs.
19:49 kkeithley hmm, okay
19:53 sjoeboo yeah, i'm going through each possible slave (its a 5 entry round-robin i initially used for a slave target), trying to stop, but a few keep popping back up
19:54 harshpb joined #gluster
19:55 hateya joined #gluster
19:58 sjoeboo so, logs are getting named like:
19:58 sjoeboo ssh%3A%2F%2Froot%4010.242.64.121%3Afile%3A%2F%2F%2Fgstore-rep.log
19:58 harshpb joined #gluster
19:58 sjoeboo which would roughly be: ssh://root@10.242.64.121:file:/// ?
20:00 sjoeboo but i'm yet to find a way to stop that sync attempt from waking up.
20:00 sjoeboo there isn't a  "disable all geosync for this volume" option ?
20:04 kkeithley joined #gluster
20:05 harshpb joined #gluster
20:08 harshpb joined #gluster
20:08 m0zes gluster volume geo-replication <volume> status, then gluster volume geo-replication <volume> <slave> stop; where the slave is the exact string provided as the slave from the status command
20:09 m0zes in theory... I've never had that issue, though.
20:10 sjoeboo hm, well, i can see one job there, says its okay, but when i go to stop it, i get told there are no active sessions, and the command failed
20:11 m0zes weird.
20:14 sjoeboo yeah
20:14 sjoeboo and the ssh based one i still see trying to fire off, but they are not even listed as configured
20:17 duffrecords joined #gluster
20:18 JoeJulian look for "glusterfs" processes that look like they might have something to do with geosync. I don't know for sure if there should be one, but that's what I would suspect. Seems like if you stopped geosync there shouldn't be.
20:27 semiosis ,,(processes)
20:27 glusterbot the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more information.
20:27 semiosis see link
20:31 JoeJulian Oh yeah, right... And there is a glusterfs process that associates with that as well. It's kind-of sneaky in that it mounts the volume, attaches the process, then lazy unmounts so the mount doesn't show in the mount list but the process still has access to it (iirc).
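A quick, hedged way to see whether any geo-rep machinery is still alive, based on the process names glusterbot lists above (the bracket trick just keeps grep from matching itself):
    ps -ef | egrep '[g]syncd|[g]lusterfs'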
20:33 semiosis JoeJulian: did you see my chats about znc re: excess flood?
20:34 semiosis to make a long story short, their 1.0 release adds flood protection
20:35 JoeJulian Yes, I plan on checking mine after I figure out why our cobol programmer is the only person that can't seem to establish a vpn connection. <grumble>
20:37 m0zes o_O cobol?
20:41 JoeJulian ... yeah... It's becoming more and more of a pain in my ass.
20:42 m0zes it is the language that will never die.
20:43 JoeJulian I guess fortran's making a comeback as well.
20:43 m0zes unfortunately. my users use that *all* the time.
21:08 JuanBre joined #gluster
21:11 JuanBre Hi, I'm trying to identify the performance bottleneck in a gluster volume. I have four servers, and the volume has 2 replicas and 2 stripe....
21:11 JuanBre one of the servers (that also acts as client) is connected to the network at 10Gbps...and the other ones only at 1Gbps...
21:14 JuanBre running a performance test on the mounted volume I get around 100Mbps...
21:15 JuanBre Plotting the network, processor, ram and disk usage of every server does not show where the bottleneck is....
21:58 JoeJulian JuanBre: Have you confirmed it's not a network issue (iperf?)
22:00 JoeJulian Also, what is the system load you're testing against?
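A minimal iperf check between any two of the boxes, using classic iperf2 flags and a hypothetical hostname:
    # on one server
    iperf -s
    # on the client (the 10GbE box, say); -P 4 runs four parallel streams for 30 seconds
    iperf -c server1 -t 30 -P 4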
22:06 jk007 joined #gluster
22:08 adrian15|2 joined #gluster
22:09 adrian15|2 So if I had a server with nfs shares... and now I want just to export them but in default port thanks to GlusterFS without using any of the GlusterFS capabilities... Is there any way of doing that ? You know... just as a normal nfs server.
22:09 JoeJulian Uninstall glusterfs?
22:10 semiosis nfs.disable ?
22:10 adrian15|2 JoeJulian: Without installng it.
22:10 JoeJulian Oh! I see
22:10 JoeJulian You want to nfs export something OTHER than glusterfs on the glusterfs server.
22:11 adrian15|2 JoeJulian: That's it. So I was wondering if there was a "non-gluster" mode of working of glusterfs or something.
22:12 JoeJulian like semiosis suggested, set nfs.disable on. That will prevent gluster from trying to export that volume. Set that for all your volumes, make sure that the glusterfs process that provides nfs isn't running, then you should be able to start the kernel nfs daemon.
22:12 JoeJulian DON'T export the bricks.
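The sequence JoeJulian describes, sketched with a hypothetical volume name and Ubuntu's kernel NFS service (adjust the service name for your distro):
    # repeat for every volume so gluster's built-in NFS server stays out of the way
    gluster volume set myvol nfs.disable on
    # the gluster NFS server is a glusterfs process whose command line mentions nfs
    ps -ef | grep '[g]lusterfs' | grep nfs
    # once it's gone, start the kernel NFS server for the plain exports
    service nfs-kernel-server start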
22:14 adrian15|2 JoeJulian: That's not what I want :). I want to be able to export Glusterfs volumes as NFS and normal directorios as NFS too. If there isn't an standard I'll try with a replicated volume which will only have one node.
22:14 adrian15|2 *directories
22:16 JoeJulian Ok, it's /theoretically/ possible to run both nfs servers, but I've never figured out how to do it.
22:16 adrian15|2 JoeJulian: Do you mean in the same port ?
22:18 JoeJulian I have no idea. I just know that I've been told there should be a way to do it. It sounds interesting so I'd like to figure out how, but I have no practical use for that myself so it hasn't hit the top of my priority list.
22:19 adrian15|2 JoeJulian: Ok. I'll go into the 1-replicated way. Thank you.
22:19 duffrecords I set up a new Gluster installation last week and so far the transfer speed is pitifully slow (~700 KB/s).  I see tons of "failed to get the 'linkto' xattr No data available" errors in the logs.  what does that mean?
22:22 JoeJulian duffrecords: Did you rsync the data onto the volume?
22:22 duffrecords yeah, onto the volume, not onto the brick
22:22 JoeJulian Did you use --inplace?
22:22 JoeJulian no
22:23 duffrecords nope
22:23 duffrecords just -arSv
22:24 JoeJulian What happens is that rsync creates a tempfile, copies everything into it, then renames that tempfile. This causes the dht hash to be calculated for the tempfile when placing the file. Once it's renamed, the hash is different and the file has to be found among the entire distribute set. See http://joejulian.name/blog/dht-misses-are-expensive/
22:24 glusterbot <http://goo.gl/A3mCk> (at joejulian.name)
22:24 JoeJulian You can fix that with a rebalance.
22:25 JoeJulian That's why --inplace should always be used with rsync on gluster volumes.
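Put concretely, with hypothetical paths and volume name (note that --inplace and -S/--sparse are mutually exclusive, which comes up just below):
    # copy onto the mounted volume without rsync's rename-into-place step
    rsync -a --inplace /source/dir/ /mnt/myvol/dir/
    # fix up files that already landed under the wrong dht hash
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status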
22:27 duffrecords thanks.  I'll give --inplace a try
22:29 duffrecords would an ordinary "cp -ar" be a better choice for restoring huge amounts of data into an empty volume?  I know rsync is better suited for synchronizing two mostly-similar directory trees
22:30 JoeJulian I prefer rsync if for no other reason than I like the directory specification syntax.
22:36 duffrecords ooh… sparse cannot be used with inplace
22:37 duffrecords what would you recommend as the fastest way to move multi-gigabyte virtual disk images to and from the volume?
22:38 JuanBre JoeJulian: yes...I am monitoring the network bandwidth usage all the time, and the maximum I get is 65MBps
22:39 JuanBre JoeJulian: iperf shows 980Mbps
22:41 duffrecords JoeJulian: --inplace made a big difference.  files are transferring at 25 MB/s instead of 700 KB/s
22:45 duffrecords however, running a parallel rsync eats into that 25 MB/s bandwidth
22:49 lh joined #gluster
22:49 lh joined #gluster
23:17 hattenator joined #gluster
23:21 Gilbs joined #gluster
23:28 Gilbs Afternoon all -- Is there any issue with deleting the mnt-*.log files in /var/log/glusterfs?  I have a full / and those log files are my biggest pain.  Will they be recreated?
23:29 JoeJulian Most of us use logrotate with copytruncate.
23:30 Gilbs ahh, i see thanks.
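A minimal logrotate stanza along those lines; the glob and rotation counts are only an example:
    # /etc/logrotate.d/glusterfs (example)
    /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log {
        weekly
        rotate 4
        compress
        missingok
        copytruncate
    }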
23:33 raven-np joined #gluster
23:34 jk007 left #gluster
23:47 lh joined #gluster
23:47 lh joined #gluster
23:57 Gilbs left #gluster
