
IRC log for #gluster, 2013-08-27


All times are shown in UTC.

Time Nick Message
00:00 sashko a2_: timeout values are in secs by default?
00:01 a2_ in secs always
00:02 sashko ok
00:04 RobertLaptop joined #gluster
00:06 dbruhn joined #gluster
00:19 jporterfield joined #gluster
00:50 awheeler joined #gluster
01:00 DataBeaver joined #gluster
01:12 badone joined #gluster
01:16 atrius joined #gluster
01:17 StarBeast joined #gluster
01:38 davinder joined #gluster
01:43 awheeler joined #gluster
01:44 anands joined #gluster
02:02 awheeler joined #gluster
02:11 jporterfield joined #gluster
02:14 anands joined #gluster
02:19 bala joined #gluster
02:25 MarcinEF joined #gluster
02:26 awheeler joined #gluster
02:26 awheeler joined #gluster
02:28 StarBeast joined #gluster
02:30 sgowda joined #gluster
02:40 SteveWatt joined #gluster
02:40 harish joined #gluster
02:40 SteveWatt Hey Folks - Can anyone tell me what version of glusterfs is used for Red Hat Storage 2.0.5 ?
02:44 hagarth SteveWatt: RHS 2.0 is based on GlusterFS 3.3
02:44 SteveWatt so would that be 3.3.2 ?
02:45 hagarth SteveWatt: a custom downstream version :)
02:45 asias joined #gluster
02:45 SteveWatt ok. Thanks hagarth !
02:45 SteveWatt * last min prep for Gluster Community Day *
02:46 SteveWatt :)
02:46 hagarth SteveWatt: good luck with that :)
02:46 SteveWatt thanks
02:53 saurabh joined #gluster
02:54 jporterfield joined #gluster
03:05 glusterbot New news from newglusterbugs: [Bug 1001418] Upgrade from RHS2.0-U5 to U6 results in broken gluster-swift services, it gives 503 for every request <http://goo.gl/UadvX5>
03:07 SteveWatt left #gluster
03:09 awheeler joined #gluster
03:14 lyang0 joined #gluster
03:21 shubhendu joined #gluster
03:21 bharata-rao joined #gluster
03:27 lyang0 joined #gluster
03:31 shylesh joined #gluster
03:38 asias joined #gluster
03:48 atrius joined #gluster
03:50 bulde joined #gluster
03:58 sgowda joined #gluster
04:01 anands joined #gluster
04:04 ppai joined #gluster
04:08 MarcinEF Hello, anyone tried mounting gluster with fuse on FreeBSD?
04:10 jporterfield joined #gluster
04:11 RameshN joined #gluster
04:12 nightwalk joined #gluster
04:14 ngoswami joined #gluster
04:16 dusmant joined #gluster
04:23 jporterfield joined #gluster
04:23 mohankumar__ joined #gluster
04:29 bulde joined #gluster
04:35 ndarshan joined #gluster
04:40 shruti joined #gluster
04:41 ajha joined #gluster
04:42 psharma joined #gluster
04:49 spandit joined #gluster
04:50 sahina joined #gluster
04:50 aravindavk joined #gluster
04:53 atrius joined #gluster
05:09 jporterfield joined #gluster
05:09 nshaikh joined #gluster
05:11 sgowda joined #gluster
05:16 syntheti_ joined #gluster
05:21 satheesh joined #gluster
05:25 davinder2 joined #gluster
05:25 codex joined #gluster
05:29 lyang0 left #gluster
05:30 satheesh1 joined #gluster
05:31 lyang0 joined #gluster
05:33 lalatenduM joined #gluster
05:33 shireesh joined #gluster
05:33 vpshastry joined #gluster
05:34 lalatenduM joined #gluster
05:34 mohankumar__ sgowda: ping
05:35 mohankumar__ i am looking for fuse lookup related help, whom do i have to contact?
05:35 hagarth joined #gluster
05:40 sgowda mohankumar: you could fire away here, or send a mail across to devel mailing list
05:43 raghu joined #gluster
05:44 mohankumar__ sgowda: here or #gluster-dev ?
05:45 bulde joined #gluster
05:46 sgowda #gluster-dev
05:48 codex joined #gluster
05:49 rjoseph joined #gluster
05:50 vshankar joined #gluster
06:09 ricky-ticky joined #gluster
06:09 bala joined #gluster
06:10 glusterbot New news from resolvedglusterbugs: [Bug 968301] improvement in log message for self-heal failure on file/dir in fuse mount logs <http://goo.gl/GI3SX>
06:12 shireesh joined #gluster
06:17 dusmant joined #gluster
06:20 jtux joined #gluster
06:21 rgustafs joined #gluster
06:23 bala joined #gluster
06:24 vimal joined #gluster
06:38 aravindavk joined #gluster
06:41 kanagaraj joined #gluster
06:43 davinder joined #gluster
06:44 syntheti_ joined #gluster
06:47 guigui1 joined #gluster
06:51 rastar joined #gluster
06:51 badone joined #gluster
06:52 ctria joined #gluster
06:53 davinder joined #gluster
06:53 eseyman joined #gluster
07:05 tziOm joined #gluster
07:05 satheesh joined #gluster
07:05 jtux joined #gluster
07:05 ndarshan joined #gluster
07:13 jmsa joined #gluster
07:18 kanagaraj joined #gluster
07:21 ngoswami joined #gluster
07:28 ricky-ticky joined #gluster
07:31 JoeJulian file a bug
07:31 glusterbot http://goo.gl/UUuCq
07:32 dusmant joined #gluster
07:35 hagarth @channelstats
07:35 glusterbot hagarth: On #gluster there have been 175047 messages, containing 7388159 characters, 1234318 words, 4933 smileys, and 657 frowns; 1077 of those messages were ACTIONs. There have been 67295 joins, 2095 parts, 65179 quits, 21 kicks, 165 mode changes, and 7 topic changes. There are currently 229 users and the channel has peaked at 229 users.
07:42 jporterfield joined #gluster
07:45 syntheti_ joined #gluster
07:49 jcsp joined #gluster
07:55 ricky-ticky joined #gluster
08:06 glusterbot New news from newglusterbugs: [Bug 1001502] Split brain not detected in a replica 3 volume <http://goo.gl/YdVQqE>
08:08 abyss^ I have one volume with 4 bricks. I'd like to add a quota on a specific directory in this volume. Is that possible? I mean, when I do gluster volume quota my_volumen limit-usage /my/specified/directory, does it add the quota only on that one directory or on the whole volume?
08:09 JoeJulian that one directory
08:09 JoeJulian you can ignore the fact that there are bricks behind the volume. The quote will exist on that volume for the specific directory you choose.
08:10 JoeJulian s/quote/quota/
08:10 glusterbot What JoeJulian meant to say was: you can ignore the fact that there are bricks behind the volume. The quota will exist on that volume for the specific directory you choose.
08:10 ninkotech_ joined #gluster
08:10 abyss^ Ok. It's quite logical but I'd better assure myself ;) Thank you JoeJulian
08:10 JoeJulian You're welcome. :D
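A minimal sketch of the directory-quota workflow discussed above, assuming a hypothetical volume named my_volume and the 3.3-era CLI syntax used in this channel:

    # enable the quota feature on the volume (one-time)
    gluster volume quota my_volume enable
    # cap a single directory; the path is relative to the volume root
    gluster volume quota my_volume limit-usage /my/specified/directory 10GB
    # verify: only the listed directory carries a limit, not the whole volume
    gluster volume quota my_volume list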
08:12 ninkotech joined #gluster
08:14 ninkotech_ joined #gluster
08:28 manik joined #gluster
08:36 Elendrys hi
08:36 glusterbot Elendrys: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:36 Elendrys can someone help me to know why i have geo-replication restarting every 5min on a volume ?
08:38 JoeJulian Because you enabled it?
08:39 Elendrys how do i check it ?
08:39 JoeJulian gluster volume info
08:39 Elendrys Volume Name: AsynchroneDeleg
08:39 Elendrys Type: Distribute
08:39 Elendrys Volume ID: 3b6da4da-696d-45ae-9ab3-777e29725cf6
08:39 Elendrys Status: Started
08:39 Elendrys Number of Bricks: 1
08:39 Elendrys Transport-type: tcp
08:39 Elendrys Bricks:
08:39 Elendrys Brick1: orque-deleg:/home/VolumesGluster/AsynchroneDeleg
08:39 Elendrys Options Reconfigured:
08:39 Elendrys geo-replication.indexing: on
08:39 Elendrys nfs.export-volumes: off
08:40 JoeJulian Please don't paste in channel. It makes stuff scroll off the history for others. Use fpaste.org
08:40 Elendrys sorry
08:41 StarBeast joined #gluster
08:41 atrius joined #gluster
08:43 mohankumar joined #gluster
08:45 JoeJulian Ah, "gluster volume geo-replication status"
08:46 Elendrys status is ok, but it shows faulty when it's restarting
08:47 Elendrys the state changes from ok to faulty frequently
08:48 JoeJulian I'm off to bed. It's almost 2am here. Check your logs on the master and the slave. There's bound to be some clue in there. Goodnight.
08:49 Elendrys ok goodnight
08:51 Elendrys thanks
08:57 spider_fingers joined #gluster
08:57 edward1 joined #gluster
09:16 syntheti_ joined #gluster
09:17 duerF joined #gluster
09:24 ricky-ticky joined #gluster
09:31 pkoro joined #gluster
09:32 psharma joined #gluster
09:41 ricky-ticky joined #gluster
09:45 manik joined #gluster
09:45 ngoswami joined #gluster
09:47 syntheti_ joined #gluster
09:53 bharata-rao joined #gluster
10:18 syntheti_ joined #gluster
10:19 mmalesa joined #gluster
10:22 bharata-rao joined #gluster
10:40 kkeithley1 joined #gluster
10:44 mika joined #gluster
10:45 dusmant joined #gluster
10:49 syntheti_ joined #gluster
10:49 mika hi, i'm running glusterfs 3.2.7 on debian/wheezy (64bit) with 2 nodes. after running '/etc/init.d/glusterfs-server restart ; mount $VOLUME' the servers immediately freeze. sadly it's a remote system at a customer, so debugging isn't really easy, as freeze->reboot cycles are quite long. any ideas where best to start digging into this issue?
10:50 purpleidea joined #gluster
10:50 purpleidea joined #gluster
10:51 hagarth joined #gluster
10:58 andreask joined #gluster
11:02 mbukatov joined #gluster
11:04 manik joined #gluster
11:07 glusterbot New news from newglusterbugs: [Bug 1001585] glusterd loses connection with the bricks <http://goo.gl/FKY5ZN>
11:14 failshell joined #gluster
11:20 ninkotech joined #gluster
11:20 syntheti_ joined #gluster
11:26 vmos joined #gluster
11:28 vmos hello, got a quick question. i've got a gluster setup with 4 nodes (2 distributed and 2 mirrored) i want to drop the distributed so it's just 2 nodes mirrored, can I do that without ditching the whole thing and rebuilding from scratch?
11:31 vmos you know what, screw it, there's not that much data, just going to bin it and redo
11:36 ndarshan joined #gluster
11:37 aravindavk joined #gluster
11:51 syntheti_ joined #gluster
11:53 vmos um, any way to remove bricks after you've removed the volume they're part of?
12:00 kkeithley1 @help
12:00 glusterbot kkeithley1: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin.
12:01 andreask joined #gluster
12:02 kkeithley_ If you used a subdir on the brick you can just rm -rf $subdir; mkdir $subdir.  If not, you can remove all the hidden files and xattrs or the easy way is to redo the mkfs
12:03 hagarth joined #gluster
12:04 dusmant joined #gluster
12:05 manik joined #gluster
12:07 jclift joined #gluster
12:08 vmos kkeithley_: thanks, didn't quite understand you but i added "attributes" to my googling, found this
12:08 vmos http://linuxsysadm.wordpress.com/2013/05/16/glusterfs-remove-extended-attributes-to-completely-remove-bricks/
12:08 glusterbot <http://goo.gl/eFTf5u> (at linuxsysadm.wordpress.com)
12:08 vmos works nice!
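A sketch of the two brick-cleanup paths kkeithley_ outlines above, assuming a hypothetical brick path /export/brick1; the xattr names are the ones gluster sets on a brick root, as in the linked article:

    BRICK=/export/brick1
    # option 1: if the brick was a subdirectory, just recreate it
    rm -rf $BRICK && mkdir $BRICK
    # option 2: keep the filesystem and strip gluster's metadata
    # instead of re-running mkfs
    setfattr -x trusted.glusterfs.volume-id $BRICK
    setfattr -x trusted.gfid $BRICK
    rm -rf $BRICK/.glusterfs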
12:12 B21956 joined #gluster
12:16 kkeithley_ @info
12:16 glusterbot kkeithley_: Error: The command "info" is available in the Factoids, MessageParser, and RSS plugins. Please specify the plugin whose command you wish to call by using its name as a command before "info".
12:16 kkeithley_ @show
12:16 glusterbot kkeithley_: (show [<channel>|global] [--id] <regexp>) -- Looks up the value of <regexp> in the triggers database. <channel> is only necessary if the message isn't sent in the channel itself. If option --id specified, will retrieve by regexp id, not content.
12:22 syntheti_ joined #gluster
12:24 jporterfield joined #gluster
12:24 delhage joined #gluster
12:26 rastar_ joined #gluster
12:27 bulde joined #gluster
12:43 satheesh joined #gluster
12:46 rwheeler joined #gluster
12:49 ngoswami joined #gluster
12:52 robo joined #gluster
12:53 syntheti_ joined #gluster
12:53 awheeler joined #gluster
12:53 mmalesa_ joined #gluster
12:53 dewey joined #gluster
12:54 awheeler joined #gluster
13:01 harish joined #gluster
13:03 eseyman joined #gluster
13:10 social hmm gluster geo-replication shows an abnormal number of ctx switches :/
13:12 aravindavk joined #gluster
13:15 badone joined #gluster
13:17 Arco joined #gluster
13:18 Arco Hello!
13:18 glusterbot Arco: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:21 robos joined #gluster
13:21 Arco Hello, is the virtual appliance still available? i want to test it on 4 vmware esxi nodes for VDI
13:25 bala joined #gluster
13:27 aliguori joined #gluster
13:27 bala joined #gluster
13:28 nightwalk joined #gluster
13:28 Arco or do i need to install centos and configure it manually?
13:35 [o__o] left #gluster
13:37 [o__o] joined #gluster
13:37 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <http://goo.gl/XFSM4>
13:38 [o__o] left #gluster
13:40 [o__o] joined #gluster
13:41 ndevos Arco: recent versions are not available as VM images, it's better to install manually
13:44 guigui joined #gluster
13:45 rgustafs joined #gluster
13:47 jclift joined #gluster
13:50 kaptk2 joined #gluster
13:54 satheesh1 joined #gluster
13:54 Elendrys hi ndevos
13:54 foexle joined #gluster
13:55 Elendrys i did the change in my config but it didn't fix my "peer rejected" error
13:57 foexle heyho guys, i've a strange issue. 'gluster volume quota xxx list' shows a wrong usage size .... i've a replicated volume and the size is correct on all bricks. I can't find anything via google and i can't see any errors in my log files
13:57 foexle has anyone had a similar issue ?
13:58 ndevos Elendrys: you could stop all your glusterd processes and update the status value in /var/lib/glusterd/glusterd.info - but no idea if that would fix it
13:58 ndevos Elendrys: without knowing what causes it (connection troubles?) it will be trial+error
14:00 Elendrys i got it back
14:00 Elendrys i did a backup of /var/lib/glusterd before
14:01 Elendrys on both node
14:01 Guest53741 joined #gluster
14:02 Elendrys i may have a connection issue, as i set up a new VM with the same version and config and found something strange
14:04 zaitcev joined #gluster
14:04 ndevos Elendrys: the other ugly way that I would try is, stop all glusterd, backup .../glusterd/vols, wipe .../glusterd, start glusterd, peer probe, kill all glusterd, restore .../glusterd/vols, start glusterd
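Spelled out as commands, ndevos's "ugly" sequence would look roughly like this; it is an untested, risky sketch, and the backup path and the peer hostname other-node are assumptions:

    # on the affected node: stop glusterd, keep the volume definitions aside
    service glusterd stop
    cp -a /var/lib/glusterd/vols /root/glusterd-vols.bak
    # wipe the remaining glusterd state, start fresh, and re-probe the peer
    rm -rf /var/lib/glusterd/*
    service glusterd start
    gluster peer probe other-node
    # stop again, restore the volume definitions, and restart
    service glusterd stop
    cp -a /root/glusterd-vols.bak /var/lib/glusterd/vols
    service glusterd start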
14:04 Elendrys when i do a gluster volume status on my testing machines, gluster outputs both local and remote status, but on the production, it shows only local
14:05 ndevos Elendrys: that really sounds as connection issues
14:06 bennyturns joined #gluster
14:06 Elendrys ndevos: i asked a few months ago about a replica healing issue and got no answer. I think i will plan a complete cleanup of my volumes
14:08 Elendrys ndevos: yes but i've just figured it out.. but the behavior of the replicas and geo-replicated volumes looks ok, as files are duplicated as they should be.
14:10 premera joined #gluster
14:11 ndevos Elendrys: well, I think it's good as long as it's working... the state still sounds a little strange to me, but well, don't break it :)
14:11 glusterbot New news from resolvedglusterbugs: [Bug 880157] libgfapi - samba integration <http://goo.gl/GkTQw>
14:13 Elendrys ndevos: i don't know, my logs are messy because i still have this self-heal issue and i don't know if there's a way to fix it without destroying and recreating the replicated volume
14:19 spider_fingers left #gluster
14:19 Elendrys ndevos: and i'm not 200% sure about the success of upgrading from 3.3.0 to 3.3.2 or 3.4
14:22 robos joined #gluster
14:22 vmos lo, is it possible to allow access to gluster by IP, without using the entire /24? something like this? gluster volume set volume1 auth.allow 172.1.1.128/26
14:23 vmos can it be done? am I just using the wrong syntax?
14:27 vmos alternatively, can I use the password auth if I'm connecting with the nfs client?
14:28 Arco Hello, ok i'll install it manually. is gluster good for a VDI-in-a-box solution?
14:29 plarsen joined #gluster
14:30 saurabh joined #gluster
14:31 vpshastry joined #gluster
14:32 rocking7771 joined #gluster
14:32 rocking7771 hello room
14:32 rocking7771 i am setting up a failover server and need my data (file system) to be in sync
14:33 rocking7771 so i am assuming gluster is the right option...
14:33 rocking7771 is it so
14:33 vmos depends on your workload I guess, what's the filesystem for?
14:35 neofob it would be nice if glusterfs supports fallocate
14:36 rocking7771 it contains the code and user uploaded files
14:36 vmos just a filestore? not a webserver or something?
14:37 rocking7771 apache webserver having code and uploaded files
14:37 rocking7771 its a VM
14:37 rocking7771 but i will set up the webservers on my other server manually.. my only question is how to sync the code and files, which change often
14:38 vmos should be grand then. Now i'm no expert but the research I've been doing the past few days suggests that for a webserver you're better off using an nfs client to connect to a gluster node than the gluster client
14:38 vmos although, using gluster for the sync should be fine
14:39 rocking7771 i can also do the same with rsync, right? any reason why gluster?
14:39 vmos if you're using nfs client, you may also want to use keepalived with a virtual ip (at least that's what I'm doing)
14:39 LoudNoises if you're running php code off your volume, you should be aware there are some performance issues
14:39 vmos two way mirroring, automatic failover
14:39 vmos at least, that's why I'm using it
14:40 dustin1 joined #gluster
14:40 rocking7771 vmos.. oh okay.. so it provides 2 way mirroring
14:41 Arco can glusterFS handle a VDI solution? 4 ESXi servers; each server hosts 100 VMs and has 16x ssd connected to an LSI card. i want to pass it through to a vm, install centos & glusterFS on it, then share it with iSCSI or NFS to the esxi server. is this a good solution?
14:41 saurabh joined #gluster
14:42 bugs_ joined #gluster
14:46 daMaestro joined #gluster
14:47 jdarcy joined #gluster
14:50 rocking7771 left #gluster
14:56 jtux joined #gluster
15:00 chirino joined #gluster
15:03 zerick joined #gluster
15:10 MrNaviPacho joined #gluster
15:14 robos joined #gluster
15:15 purpleidea joined #gluster
15:15 purpleidea joined #gluster
15:22 bala joined #gluster
15:27 andreask joined #gluster
15:31 MrNaviPacho joined #gluster
15:32 vpshastry left #gluster
15:37 mmalesa joined #gluster
15:48 mmalesa_ joined #gluster
15:57 spandit joined #gluster
16:05 grumpy_lubko joined #gluster
16:05 social joined #gluster
16:14 dustin1 left #gluster
16:22 vpshastry joined #gluster
16:25 vpshastry left #gluster
16:30 bulde joined #gluster
16:35 devoid joined #gluster
16:37 devoid hi folks, where are volume options set in 3.4?
16:38 Mo_ joined #gluster
16:39 bulde joined #gluster
16:39 devoid also are the options listed here: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options still accurate?
16:39 glusterbot <http://goo.gl/dPFAf> (at gluster.org)
16:44 ndevos devoid: you can start with 'gluster volume set help', manually editing .vol files is not done anymore (only on extreme rare exceptions)
16:47 devoid ndevos: thanks. Is there a way to query the current configuration settings? I was expecting something ZFS-like "gluster volume get <VOLNAME> <KEY | all>"
16:48 hagarth joined #gluster
16:48 toad joined #gluster
16:49 ndevos devoid: 'gluster volume info $VOLUME' lists the non-default values
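Put together, the query side ndevos describes is just two commands; the volume name myvolume is hypothetical:

    # list every settable option with its default and a short description
    gluster volume set help
    # show only the values that differ from the defaults
    gluster volume info myvolume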
16:51 hagarth joined #gluster
17:03 ninja76 joined #gluster
17:06 JoeJulian Elendrys: Some of the problems you're describing were fixed in 3.3.1
17:06 efries joined #gluster
17:07 JoeJulian vmos: much to my disappointment, auth.allow (or anything else that takes IP ranges) still does not accept cidr notation, only globs.
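A sketch of the glob syntax JoeJulian means, reusing vmos's volume1 from earlier; entries are comma-separated and CIDR masks like /26 are rejected:

    # wildcard per octet instead of a CIDR mask
    gluster volume set volume1 auth.allow '172.1.1.*'
    # multiple addresses/patterns are comma-separated
    gluster volume set volume1 auth.allow '172.1.1.*,10.0.0.5'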
17:10 hagarth joined #gluster
17:11 JoeJulian vmos: To change your perspective a little, you wouldn't be using glusterfs to provide a service to mirror your content to multiple web servers; glusterfs provides a single unified volume that can be accessed by multiple clients using standard posix semantics. That volume /may/ exist on your web servers but it doesn't have to.
17:11 JoeJulian vmos: Also... ,,(php)
17:11 glusterbot --negative-timeout=HIGH --fopen-keep-cache
17:11 glusterbot vmos: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH
17:11 JoeJulian Heh, reads like a udp joke.
17:13 JoeJulian And finally, a minor correction to what LoudNoises stated, "there [may be] some performance issues". Some engineers seem to meet with success while others don't. YMMV.
17:14 LoudNoises yea, sorry, didn't mean to discourage people from trying it :)
17:14 JoeJulian That's okay.
17:14 JoeJulian I just wish there was some easy way to figure out why some people can't succeed while others do.
17:18 devoid left #gluster
17:24 davinder joined #gluster
17:26 robo joined #gluster
17:28 hagarth joined #gluster
17:38 manik joined #gluster
17:40 thomasrt I've got a problem with a self-heal that isn't correcting a long-standing issue.  The error message in the glustershd.log looks like
17:40 thomasrt gdata002 [2013-08-26 11:34:02.992011] E [afr-self-heald.c:685:_link_inode_update_loc] 0-data-replicate-0: inode link failed on the inode (00000000-0000-0000-0000-000000000000)
17:40 thomasrt looking around on the local directories, I notice that one server is missing a directory at the root level
17:41 thomasrt it's not getting created by any self-heal crawl. A targeted stat of the directory or of a file child that should be stored there also doesn't resolve the problem.
17:41 thomasrt how can I get this directory to heal?
17:41 bala1 joined #gluster
17:48 GomoX What is the format for mount options in fstab? (i.e negative-timeout)
17:52 plarsen joined #gluster
17:58 plarsen joined #gluster
17:58 mmalesa joined #gluster
18:01 thomasrt quiet today.  Lots of questions, no answers
18:03 thomasrt in regard to my question above, what happens if I manually create the missing directory? Do directories need matching $BRICK/.glusterfs entries?
18:07 semiosis chirino: ping
18:08 psharma joined #gluster
18:09 chirino semiosis: pong
18:10 semiosis see pm
18:13 GomoX Well, changing the mount options really made a huge difference for my PHP system
18:14 GomoX :D
18:25 GomoX apc.stat on the other hand doesn't seem to change much
18:34 GomoX I used negative-timeout=120,fopen-keep-cache,entry-timeout=120,attribute-timeout=120
18:35 GomoX What are the implications of storing log files on such a volume? i.e from a conflicts point of view
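For GomoX's earlier fstab question, those same options would look roughly like this as an fstab entry; the server name, volume name, and mount point are assumptions:

    # /etc/fstab - glusterfs native (FUSE) mount with the options above
    server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,negative-timeout=120,entry-timeout=120,attribute-timeout=120,fopen-keep-cache  0 0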
18:38 bulde joined #gluster
18:40 compbio joined #gluster
18:42 mmalesa joined #gluster
18:43 JerryM left #gluster
18:46 wushudoin joined #gluster
18:52 JuanBre joined #gluster
18:53 plarsen joined #gluster
18:53 JuanBre hi, I had some problems with two servers that generated a split-brain problem in thousands of files...
18:54 JuanBre I already know which of the servers has the right files...
18:55 JuanBre so my idea was to recursively delete the conflicting files and the gfids
18:56 JuanBre but then I thought I might hit a file that is not conflicting...
18:57 JuanBre so my question is...what happens if i delete a file and its gfids from one replica but not from the other one (my guess is that I will generate a new split brain)
18:58 semiosis JuanBre: probably it will just get healed from the other copy
18:58 semiosis JuanBre: you could, of course, just create an unimportant file in your volume and test that out
18:59 Technicool joined #gluster
19:06 JuanBre semiosis: I have just tested it...it completely recovers the file...
19:08 JuanBre semiosis: I don't understand how gluster realizes the other replica has the right information...
19:08 semiosis well if only one has the information, then it can't be wrong
19:08 semiosis no conflict
19:09 semiosis for a real explanation though see the ,,(extended attributes) article
19:09 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
19:10 gmcwhistler joined #gluster
19:10 JuanBre yes...but that means some deleted files might come back if a failed node suddenly reappears...
19:11 semiosis ah, if there are changes made to a file while a replica is offline then the online copy gets marked (in xattrs) as having unsync'd changes
19:11 semiosis see that article
19:12 JuanBre I read that article...I mean unchanged and deleted files...
19:13 semiosis it works for directories too
19:13 semiosis so if a file is removed while a replica is offline the directory gets marked as having changes
19:14 JuanBre Ah! I was missing that...the directory is then marked...thanks!
19:15 shanks_ joined #gluster
19:16 semiosis yw
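To see the marking semiosis describes, the getfattr command glusterbot gave can be pointed at both a file and its parent directory directly on a brick; the brick path here is hypothetical:

    # pending-operation counters live in trusted.afr.<volume>-client-N xattrs
    getfattr -m . -d -e hex /export/brick1/dir/file
    # the parent directory's own trusted.afr entry changelog is what records
    # creates and deletes made while the other replica was offline
    getfattr -m . -d -e hex /export/brick1/dir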
19:24 B21956 joined #gluster
19:25 B21956 left #gluster
19:26 B21956 joined #gluster
19:27 sprachgenerator joined #gluster
19:27 thomasrt on my issue, I dug a little deeper.  It turned out I had a subvolume offline.  There's nothing wrong with the brick filesystem that I can see.
19:27 thomasrt will restarting glusterd be enough to bring that subvolume back online?
19:37 bennyturns joined #gluster
19:37 bennyturns working still?
19:38 bennyturns is the BNE server working still?
19:38 bennyturns oops wrong channel :(
19:40 mmalesa_ joined #gluster
19:51 wushudoin joined #gluster
19:51 wushudoin joined #gluster
20:02 JoeJulian a2_, avati, hagarth: Could you tell me what additional information would be useful for bug 1001585 ? What string is hashed to produce the socket filename? I think it's a concatenation of "/var/run${server}${brick_path_with_slash_removed}" could you confirm or correct?
20:02 glusterbot Bug http://goo.gl/FKY5ZN unspecified, unspecified, ---, kparthas, NEW , glusterd loses connection with the bricks
20:03 robo joined #gluster
20:12 kaptk2 joined #gluster
20:26 rwheeler joined #gluster
20:37 jporterfield joined #gluster
20:37 mmalesa joined #gluster
20:54 foexle joined #gluster
20:57 jporterfield joined #gluster
20:57 xymox joined #gluster
21:02 badone joined #gluster
21:02 mmalesa_ joined #gluster
21:10 avati joined #gluster
21:10 hagarth_ joined #gluster
21:11 duerF joined #gluster
21:14 semiosis :O
21:27 avati joined #gluster
21:27 hagarth_ joined #gluster
21:34 fkautz joined #gluster
22:06 jporterfield joined #gluster
22:14 badone joined #gluster
22:22 awheele__ joined #gluster
22:48 fidevo joined #gluster
22:51 StarBeast joined #gluster
22:52 fidevo joined #gluster
22:54 bala joined #gluster
22:57 bala1 joined #gluster
23:14 awheeler joined #gluster
23:18 daMaestro joined #gluster
23:21 jporterfield joined #gluster
23:27 jporterfield joined #gluster
23:39 glusterbot New news from newglusterbugs: [Bug 990330] geo-replication fails for longer fqdn's <http://goo.gl/X4adNQ>
