
IRC log for #gluster, 2016-01-29


All times shown according to UTC.

Time Nick Message
00:01 haomaiwang joined #gluster
00:03 cpetersen_ you as well :)
00:09 MACscr|lappy_ joined #gluster
00:26 caitnop joined #gluster
00:30 caitnop joined #gluster
00:39 om2 joined #gluster
00:42 luizcpg joined #gluster
00:57 luizcpg joined #gluster
01:01 17WABODR0 joined #gluster
01:03 theron joined #gluster
01:09 nickage__ joined #gluster
01:15 nangthang joined #gluster
01:27 Lee1092 joined #gluster
01:53 baojg joined #gluster
01:56 theron joined #gluster
02:12 gem joined #gluster
02:14 6JTAABU08 joined #gluster
02:18 nishanth joined #gluster
02:25 nangthang joined #gluster
02:41 gildub joined #gluster
02:47 harish joined #gluster
02:58 ahino joined #gluster
03:01 haomaiwa_ joined #gluster
03:15 overclk joined #gluster
03:20 bharata-rao joined #gluster
03:21 kanagaraj joined #gluster
03:27 Manikandan joined #gluster
03:29 MACscr|lappy_ joined #gluster
03:37 RameshN_ joined #gluster
03:43 xoritor ok there is something really screwy with the networking
03:43 spalai joined #gluster
03:43 xoritor as soon as i try to enable a nic in a vm it dies
03:44 xoritor that is what seems to be killing everything
03:44 itisravi joined #gluster
03:47 sakshi joined #gluster
03:48 theron joined #gluster
03:52 nbalacha joined #gluster
03:53 atinm joined #gluster
04:01 haomaiwa_ joined #gluster
04:01 shubhendu joined #gluster
04:04 RameshN_ joined #gluster
04:17 ashiq joined #gluster
04:19 atinm joined #gluster
04:26 MACscr|lappy_ joined #gluster
04:30 nehar joined #gluster
04:31 gem joined #gluster
04:39 rafi joined #gluster
04:41 poornimag joined #gluster
05:01 haomaiwang joined #gluster
05:02 poornimag joined #gluster
05:06 skoduri joined #gluster
05:07 spalai left #gluster
05:08 spalai joined #gluster
05:09 spalai left #gluster
05:09 ovaistariq joined #gluster
05:11 ovaistar_ joined #gluster
05:13 aravindavk joined #gluster
05:14 Bhaskarakiran joined #gluster
05:18 hgowtham joined #gluster
05:18 pppp joined #gluster
05:23 anil joined #gluster
05:25 ramky joined #gluster
05:29 jiffin joined #gluster
05:31 calavera joined #gluster
05:41 Saravanakmr joined #gluster
05:41 kdhananjay joined #gluster
05:42 arcolife joined #gluster
05:44 ppai joined #gluster
05:52 karnan joined #gluster
05:57 nishanth joined #gluster
05:57 kanagaraj joined #gluster
05:57 nangthang joined #gluster
06:00 vimal joined #gluster
06:01 haomaiwa_ joined #gluster
06:12 vmallika joined #gluster
06:18 spalai joined #gluster
06:22 kanagaraj joined #gluster
06:31 theron joined #gluster
06:36 spalai joined #gluster
06:39 dusmant joined #gluster
06:40 karthikfff joined #gluster
07:01 haomaiwa_ joined #gluster
07:09 mhulsman joined #gluster
07:17 ctria joined #gluster
07:18 jtux joined #gluster
07:20 JoeJulian @nfs
07:21 glusterbot JoeJulian: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
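    (For reference, a minimal sketch of the NFS mount described in the factoid above, assuming an EL-style systemd distribution and placeholder host/volume names:)
        # on the Gluster server: an RPC port mapper must be running and the kernel NFS server disabled
        systemctl start rpcbind
        systemctl stop nfs-server && systemctl disable nfs-server
        # on the client: mount the Gluster NFS export with TCP and NFS version 3
        mount -t nfs -o tcp,vers=3 gluster-server:/myvol /mnt/myvol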
07:30 tswartz joined #gluster
07:34 ovaistariq joined #gluster
07:44 poornimag joined #gluster
07:47 anil joined #gluster
08:01 haomaiwa_ joined #gluster
08:09 post-factum joined #gluster
08:12 [Enrico] joined #gluster
08:24 kovshenin joined #gluster
08:25 kanagaraj joined #gluster
08:26 b0p joined #gluster
08:38 ctria joined #gluster
08:42 ovaistariq joined #gluster
08:45 hchiramm_ joined #gluster
08:58 bfm joined #gluster
09:01 haomaiwa_ joined #gluster
09:01 atalur joined #gluster
09:08 fedele joined #gluster
09:12 fedele Hi, I'm new to IRC and to glusterfs. I would like to discuss the configuration of my 32-node HPC cluster. Can you suggest how to proceed?
09:27 anmol joined #gluster
09:43 aravindavk joined #gluster
09:47 MACscr|lappy_ joined #gluster
09:49 mhulsman joined #gluster
10:01 haomaiwang joined #gluster
10:03 b0p joined #gluster
10:06 Slashman joined #gluster
10:10 Manikandan joined #gluster
10:23 SOLDIERz joined #gluster
10:25 ppai joined #gluster
10:28 b0p joined #gluster
10:30 ovaistariq joined #gluster
10:32 Wizek joined #gluster
10:32 luizcpg joined #gluster
10:42 jeek joined #gluster
10:46 bhuddah morning, fedele
10:50 fedele good morning
10:51 fedele can you help me to understand how to configure my cluster?
10:51 bhuddah maybe. i lack working experience with clusters of that size.
10:52 bhuddah but i can try my best.
10:54 fedele on each node I have a 1TB disk and I would like to create a distributed volume collecting all the disks.
10:55 fedele I want to have a volume distributed on all 32 nodes called scratch
10:57 fedele I see that a distributed volume permits writing locally to a standalone disk, so when writing a file on scratch I write to the local brick
10:57 fedele Is this correct?
10:59 itisravi joined #gluster
11:00 bhuddah fedele: can you rephrase that?
11:00 mhulsman joined #gluster
11:01 fedele Yes
11:01 bhuddah i'm sorry but it's hard to get up to speed with the concept you're thinking about.
11:02 bhuddah so i might need some questions first :)
11:02 fedele I create a distributed gluster volume using as brick the 1TB disk on each node
11:02 bhuddah fine.
11:05 fedele I want to set it up so that if I write a file on that volume from a cluster node it produces a local write operation on that node's brick
11:06 nbalacha joined #gluster
11:06 fedele But I can see the file globally on all cluster nodes that mount the volume
11:07 fedele Is this correct?
11:07 bluenemo joined #gluster
11:11 bhuddah that's an interesting question. so you always want to write local only?
11:12 fedele I want to mount the gluster volume on /scratch on all cluster nodes
11:12 jrm16020 joined #gluster
11:14 bhuddah yes. but usually writes are distributed among all nodes
11:14 fedele And when I write to /scratch on one node I expect that (if possible) the local brick will be used (the local disk I added to the volume)
11:15 fedele Excuse me bhuddah, but I'm configuring a distributed volume
11:17 bhuddah why do you want to use the local brick for writing?
11:17 mhulsman joined #gluster
11:17 fedele If I look at my local brick I can see my file, but this file is not present on the brick of another node
11:18 bhuddah how do you mean that?
11:18 fedele Example:
11:18 fedele on node1 and node2 I create /brick
11:19 fedele on node3 I execute the command
11:19 fedele gluster volume create scratch node1:/brick node2:/brick
11:20 fedele after this I mount /scratch on node1, node2 and node3
11:20 fedele At the end
11:20 fedele from node1 I write file1 on /scratch
11:20 baojg joined #gluster
11:21 fedele I can see file1 in /scratch of node1, node2 and node3
11:21 bhuddah sure.
11:21 bhuddah that's how it is supposed to work.
11:22 fedele but I see file1 in /brick only on node1
11:22 bhuddah you're not supposed to deal with the individual bricks.
11:22 bhuddah that is managed by glusterfs
11:23 bhuddah so if you write a file in /scratch it might end up on any node in the end.
11:23 bhuddah or even on multiple nodes if you enable replication.
11:24 fedele Of course, but I suppose that gluster optimizes the operation by writing locally (if this is possible)
11:25 mhulsman joined #gluster
11:25 bhuddah there are options for additional caching and/or striping of data when you need better/optimized performance for your use case.
11:25 bhuddah but there is no general best way of setup so you might need to test a few scenarios to see which setup works best for you.
11:26 fedele Does this mean it is possible to force gluster to write locally when possible?
11:26 bhuddah maybe. but i still don't think this is even desirable behavior.
11:27 fedele Can you suggest the pages in the documentation I should read?
11:27 fedele OK
11:27 fedele I have this problem:
11:29 fedele I run a parallel program (MPI) on these 32 nodes and at the end of the execution it writes files to an NFS-mounted /home
11:30 bhuddah okay.
11:31 fedele each of these 32 processes writes its own file (for example fort.1 fort.2 ... fort.32; I use Fortran)
11:31 fedele So I have 32 parallel writes on the network, which saturates it.
11:32 bhuddah at what speed?
11:32 fedele I can force the program to write on a local fs of the node, for example creating /scratch on each node.
11:36 fedele but the question is: can I see all these files from a single mount point, like an NFS mount?
11:36 bhuddah that for sure.
11:37 fedele I can do this using gluster, correct?
11:38 bhuddah you can even export it as a nfs mount.
11:38 bhuddah so yes.
11:39 ira joined #gluster
11:39 fedele ok, now the new problem: for performance reasons I decided to create one process per core on each node..... this means I have 640 processes
11:40 fedele This means: in the end I get network congestion using the NFS-mounted /home
11:41 fedele and I would like to solve the problem using glusterfs
11:44 bhuddah what speed do you achieve over the network?
11:45 fedele I have 40Gb InfiniBand and the NFS server runs 8 nfsd daemons
11:47 jiffin1 joined #gluster
11:49 luizcpg joined #gluster
11:50 bhuddah so the nfs server is the bottleneck here?
11:50 bhuddah good.
11:51 luizcpg joined #gluster
11:52 fedele nfs server is the bottleneck: I have 640 processes that concurrently want to use the disk
11:52 bhuddah but north of 40gb bandwidth...?
11:53 bhuddah or do you only use 1gb of the bandwidth?
11:53 fedele I use 40 gb
11:54 bhuddah honestly... that's really quick.
11:57 fedele I will remake tests using bonnie and iozone
11:58 luizcpg joined #gluster
11:59 bhuddah go ahead. and see if you can set up a full sized distributed striped volume in gluster and see how that performs.
12:00 fedele a question:
12:03 fedele after i create the volume with the command (for example): gluster volume create scratch node1:/brick node2:/brick ...
12:04 fedele What is the correct way to mount the volume scratch on each node?
12:05 bhuddah usually to put it in fstab.
12:06 fedele on node1: mount -t glusterfs node1:/scratch /scratch and on node2: mount -t glusterfs node2:/scratch /scratch etc
12:06 fedele So each node is a volume server or
12:06 bhuddah it does not matter which node you put in there. you can use the same server name on every node
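    (For reference, a hedged sketch of the commands being discussed, using the host names and brick path from the example above; extend the brick list to all 32 nodes as needed:)
        # run once, from any node already in the trusted pool
        gluster volume create scratch node1:/brick node2:/brick
        gluster volume start scratch
        # /etc/fstab entry on every node; the server named here is only
        # used to fetch the volume layout at mount time
        node1:/scratch  /scratch  glusterfs  defaults,_netdev  0 0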
12:07 fedele ok, understood.
12:08 fedele In any case: the speed of the disk is 240 MByte/sec
12:09 bhuddah that's a lot less than 40gbit/s afaik.
12:10 fedele There are 640 processes that write 2.4 GByte each in 2 hours
12:11 bhuddah ~1.5 TB?
12:11 fedele Yes, the network is fast, but you also have to consider the transfer rate from server to disk
12:11 fedele In total that is 1.5 TByte of data
12:11 bhuddah yeah. sure. that's why distributed writing might be faster. but in the end you cannot write more than the combined transfer rate of all servers together.
12:12 b0p left #gluster
12:12 bhuddah this is merely 213 MB/s btw...
12:13 fedele sure, but I suppose that using gluster I won't saturate the network.
12:13 bhuddah how can you saturate the network with 213MB/s if it has a bandwidth of 40gbit?
12:14 bhuddah i'm sorry, but at the moment i am confused.
12:15 fedele You are right: I suppose the problem is the NIC of the NFS server not the network.
12:15 bhuddah okay... so at least you have reached some sort of bottleneck.
12:16 bhuddah try setting up that distributed volume over all nodes. then you can probably reduce network load at least by one order of magnitude i think.
12:16 fedele I suppose that gluster will distribute my 640 accesses to the volume
12:17 fedele Thank you for your help
12:18 bhuddah fedele: by default it should distribute it over all bricks. but that shouldn't put the same load on the network as all clients hammering one single server.
12:19 ovaistariq joined #gluster
12:20 fedele For this reason I'm asking you whether it is better to use a different volume server on each node, for example
12:22 fedele on node1: mount -t glusterfs node1:/scratch /scratch and on node2: mount -t glusterfs node2:/scratch /scratch
12:23 bhuddah afaik that's only used once while mounting to load the list of nodes.
12:23 bhuddah so that shouldn't make any difference.
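    (A hedged aside on that point: with the FUSE client the server in the mount command is only used to fetch the volume description, and 3.x clients can be given fallback servers with a mount option along these lines; the option name may vary slightly by version:)
        mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/scratch /scratch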
12:25 fedele I have read something about gluster and the FUSE support: FUSE is able to switch the volume server automatically... but I did not understand it
12:26 bhuddah i'd suggest to try some basic tests first and then dive into details.
12:27 fedele OK, now I'm trying to create the volume... I have some problems
12:28 nbalacha joined #gluster
12:36 dlambrig joined #gluster
12:37 jiffin1 joined #gluster
12:38 fedele thank you bhuddah, now I go out. I will return on monday. goodbye, ciao
12:38 bhuddah fedele: have a nice weekend!
12:39 fedele thank you, also for you!
12:39 bhuddah thx
12:39 fedele left #gluster
12:47 julim joined #gluster
12:57 luizcpg joined #gluster
13:08 kdhananjay joined #gluster
13:22 sakshi joined #gluster
13:27 B21956 joined #gluster
13:31 nehar joined #gluster
13:35 doekia joined #gluster
13:36 unclemarc joined #gluster
13:51 overclk joined #gluster
13:52 plarsen joined #gluster
13:53 gem joined #gluster
14:01 EinstCrazy joined #gluster
14:06 RameshN_ joined #gluster
14:08 jmarley joined #gluster
14:20 baojg joined #gluster
14:25 ekuric joined #gluster
14:29 plarsen joined #gluster
14:34 ppai joined #gluster
14:38 luizcpg joined #gluster
14:39 gem joined #gluster
14:49 theron joined #gluster
14:54 Lee1092 joined #gluster
14:54 ivan_rossi joined #gluster
14:55 ivan_rossi left #gluster
14:55 haomaiwa_ joined #gluster
14:59 skylar joined #gluster
15:01 16WAAPYBX joined #gluster
15:08 spalai left #gluster
15:16 baojg joined #gluster
15:23 plarsen joined #gluster
15:32 rwheeler joined #gluster
15:32 bennyturns joined #gluster
15:32 theron joined #gluster
15:32 hamiller joined #gluster
15:49 cpetersen_ I was doing a bit of testing with my triple node replicated, nfs-ganesha shared volume last night where I took the network adapter of the primary node down.  Everything was fine until I tried to bring the node back up again.  It complained that I had the IP on the network already.  So I shut all of them down and brought them up again.  Shit, split-brain.
15:51 coredump joined #gluster
15:55 ovaistariq joined #gluster
15:55 baojg joined #gluster
15:57 bowhunter joined #gluster
16:00 wushudoin joined #gluster
16:00 wushudoin joined #gluster
16:01 spalai joined #gluster
16:01 haomaiwa_ joined #gluster
16:03 RameshN_ joined #gluster
16:14 chirino_m joined #gluster
16:19 cpetersen_ I enabled auto quorum, I think that should help
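    ("auto quorum" here presumably refers to the client-side quorum option; a minimal sketch with a placeholder volume name:)
        gluster volume set myvol cluster.quorum-type auto
        # optionally, management-layer quorum as well
        gluster volume set myvol cluster.server-quorum-type server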
16:23 RameshN_ joined #gluster
16:27 MACscr|lappy joined #gluster
16:29 MACscr|lappy_ joined #gluster
16:32 shaunm joined #gluster
16:36 coredump joined #gluster
16:38 coredump joined #gluster
16:51 RameshN joined #gluster
16:51 theron joined #gluster
16:58 nehar joined #gluster
17:01 6A4ABVWE7 joined #gluster
17:05 gem joined #gluster
17:09 jiffin joined #gluster
17:14 RameshN_ joined #gluster
17:16 jiffin1 joined #gluster
17:19 baojg joined #gluster
17:19 RameshN__ joined #gluster
17:26 Larsen_ joined #gluster
17:28 RameshN_ joined #gluster
17:32 jiffin joined #gluster
17:33 squizzi_ joined #gluster
17:34 shubhendu joined #gluster
17:36 theron joined #gluster
17:37 calavera joined #gluster
17:38 baojg joined #gluster
17:42 jiffin joined #gluster
17:43 baojg joined #gluster
17:47 ovaistariq joined #gluster
17:48 armyriad joined #gluster
17:51 vimal joined #gluster
17:51 RameshN__ joined #gluster
17:56 JoeJulian @splitbrain
17:56 glusterbot JoeJulian: To heal split-brains, see https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md . Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/ . For additonal information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
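    (For reference, the 3.7-era CLI described in the first link exposes split-brain inspection and per-file resolution roughly as follows; volume name, brick, and file path are placeholders:)
        gluster volume heal myvol info split-brain
        # resolve by keeping the bigger copy of a given file
        gluster volume heal myvol split-brain bigger-file /path/inside/volume/file.img
        # or resolve by preferring one brick's copy
        gluster volume heal myvol split-brain source-brick node1:/brick /path/inside/volume/file.img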
17:57 post-factum lol, nice catch, @glusterbot
17:58 cpetersen_ ty
17:59 cpetersen_ also, when I reboot the systems, cache-invalidation is disabled on the volume again
17:59 cpetersen_ is this intentional?
17:59 RameshN_ joined #gluster
18:01 7GHAB58MP joined #gluster
18:01 dblack joined #gluster
18:01 JoeJulian If you set something with "gluster volume set" it should be permanent.
18:02 JoeJulian If not, please file a bug
18:02 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
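    (One way to confirm whether the option actually persisted, assuming a 3.7-era CLI and a placeholder volume name:)
        gluster volume set myvol features.cache-invalidation on
        # after a reboot, read the value back
        gluster volume get myvol features.cache-invalidation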
18:04 RameshN__ joined #gluster
18:05 RameshN_ joined #gluster
18:08 skylar joined #gluster
18:11 rafi joined #gluster
18:12 Rapture joined #gluster
18:14 ovaistariq joined #gluster
18:26 primusinterpares joined #gluster
18:32 nottc joined #gluster
18:43 baojg joined #gluster
18:44 calavera_ joined #gluster
18:46 ovaistariq joined #gluster
18:47 ovaistar_ joined #gluster
18:52 bennyturns joined #gluster
18:57 tom[] joined #gluster
18:59 bowhunter joined #gluster
18:59 markd_ joined #gluster
19:01 7GHAB59DO joined #gluster
19:09 calavera joined #gluster
19:11 jiffin joined #gluster
19:13 mdavidson I have just added a replica to a distributed gluster volume - 3 nodes x 3 bricks x 2 replicas - the read and write performance has dropped too far for my application while healing, is there anything I can do? the volume has millions of smallish files in a 3-level directory hierarchy and it looks like it will take a long time to replicate
19:14 bennyturns joined #gluster
19:17 JoeJulian maybe...
19:19 JoeJulian mdavidson: Ok, this has been coming up frequently, but not for everybody. There's some unlucky percentage that has this happen and I haven't found any commonality with the people that are experiencing this.
19:19 JoeJulian mdavidson: If you can try something, I would like you to change cluster.data-self-heal-algorithm to full (default is diff).
19:21 mdavidson JoeJulian, ok
19:25 mdavidson JoeJulian, I've set it. It will take a while to see if it is helping. Other details - newly built gluster on 3.7.6 - built as distributed, filled with data and run with application for a while, then added replica
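    (The change discussed above is an ordinary volume option; a sketch with a placeholder volume name:)
        gluster volume set myvol cluster.data-self-heal-algorithm full
        # revert to the default (diff) later if it does not help
        gluster volume reset myvol cluster.data-self-heal-algorithm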
19:28 luizcpg How can I forcibly delete all volumes from a given host ?
19:31 jiffin joined #gluster
19:33 ovaistariq joined #gluster
19:34 calavera joined #gluster
19:38 JoeJulian luizcpg: rm -rf /var/lib/glusterd/vols
19:38 luizcpg I did it
19:39 luizcpg It didn't work; the metadata gets regenerated … am I missing something?
19:41 luizcpg forget it… seems to be ok now
19:42 JoeJulian If the server is still part of the trusted peer group and another peer still has volumes defined, it will sync.
19:43 nottc joined #gluster
19:43 luizcpg how to detach ?
19:43 luizcpg stop glusted and then remove /var/lib/glusterd/vols ?
19:44 mdavidson JoeJulian, I currently have very low traffic, but directory reads are frequently taking > 5 secs and an ls -l on a directory holding a few hundred sub directories is taking > 20 secs (both were sub second before the replica)
19:44 JoeJulian gluster peer detach
19:44 baojg joined #gluster
19:44 luizcpg :)
19:44 luizcpg 1 - gluster peer detach
19:44 luizcpg 2 - rm -rf /var/lib/glusterd/vols
19:45 luizcpg 3 - restart glusterd
19:45 luizcpg ^ correct procedure ?
19:45 JoeJulian looks good to me.
19:45 luizcpg cool
19:45 luizcpg thx
19:45 luizcpg let me try
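    (The agreed procedure, as a shell sketch; the host name is a placeholder, the detach is normally issued from a node that stays in the pool, and the unit name assumes systemd:)
        # from a node remaining in the trusted pool
        gluster peer detach oldnode
        # on the node being cleaned up
        systemctl stop glusterd
        rm -rf /var/lib/glusterd/vols
        systemctl start glusterd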
19:47 moss_ joined #gluster
19:48 JoeJulian mdavidson: consistency checks are done on replicated volumes when a lookup() is done. An ls -l has to do a lookup on every directory entry in order to get the fstat info to display. This ensures that you're getting accurate information. If a file was stale and that check wasn't done, you would get inaccurate information.
19:49 psilvao joined #gluster
19:50 JoeJulian So the tricks are: keep from pulling fstat on every file in a directory, reduce the number of entries, or reduce latency.
19:50 psilvao Hi people, I'm a newbie; I would like to know what the following message means: afr-self-heal-common.c:2869:afr_log_self_heal_completion_status Pending matrix: [ [ 0 0 ] [ 1 0 ] ]
19:51 psilvao thanks in advance!, Pablo
19:51 JoeJulian mdavidson: 20 seconds for a few hundred directory entries sounds to me like a latency issue.
19:53 JoeJulian psilvao: I believe it means that there's a pending data heal on the second brick in a replica 2.
19:54 psilvao Thanks Joe, but this message shows up on a gluster mount point. What are the steps to follow?
19:58 mdavidson JoeJulian, I think I'll drop it back to 1 replica again and test some different setups, thanks
20:01 jmarley joined #gluster
20:01 haomaiwang joined #gluster
20:04 JoeJulian psilvao: Get the ,,(extended attributes) for the brick root on the servers and remove the trusted.afr tags, then remove the entry for [0-]\+1 from .glusterfs/indices/xattrop (if it's there). See our conversation we had just yesterday about this: https://botbot.me/freenode/gluster/2016-01-28/?msg=58891257&page=6
20:04 glusterbot psilvao: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
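    (A hedged sketch of that inspection on one server's brick root; the brick path, volume name, and client index are placeholders, so take the exact attribute names from the getfattr output before removing anything:)
        # read the afr pending counters and other gluster xattrs on the brick root
        getfattr -m . -d -e hex /brick
        # remove a stale pending marker, using the name reported above
        setfattr -x trusted.afr.myvol-client-1 /brick
        # then check the xattrop index for the corresponding entry and remove it as described
        ls /brick/.glusterfs/indices/xattrop/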
20:05 JoeJulian Please don't pm me walls of logs. Use a ,,(paste) service and just post the link.
20:05 glusterbot For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
20:15 ovaistariq joined #gluster
20:17 luizcpg JoeJulian, the volume delete procedure does not seem to be working
20:17 luizcpg http://pastebin.com/hWgC68uc
20:17 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:17 luizcpg ^ details here
20:18 luizcpg ideas are welcome
20:20 JoeJulian luizcpg: Just wipe it and start over: stop glusterd, rm -rf /var/lib/glusterd/*, start glusterd.
20:21 JoeJulian I've got a conference call to join in about 9 minutes. Going to be afk for a bit.
20:21 luizcpg so, what happens when glusterd gets restarted? won't the other nodes regenerate the data / metainfo?
20:21 theron joined #gluster
20:22 JoeJulian No, because it'll be a brand new gluster server and the other nodes won't be allowed to talk to it.
20:22 luizcpg or stop gluster everywhere, rm everywhere and then restart?
20:22 luizcpg ok
20:22 JoeJulian Unless you probe it from the existing pool.
20:22 luizcpg let me try
20:32 jiffin1 joined #gluster
20:44 jiffin joined #gluster
20:46 baojg joined #gluster
20:54 theron joined #gluster
21:03 haomaiwa_ joined #gluster
21:04 luizcpg looks good now… thanks JoeJulian
21:04 theron_ joined #gluster
21:05 JoeJulian You're welcome.
21:09 jiffin joined #gluster
21:15 mhulsman joined #gluster
21:16 mhulsman joined #gluster
21:32 jiffin joined #gluster
21:37 skylar joined #gluster
21:37 mhulsman joined #gluster
21:47 baojg joined #gluster
21:49 wushudoin joined #gluster
21:54 theron joined #gluster
22:01 18WABXUR1 joined #gluster
22:05 ctria joined #gluster
22:18 jiffin joined #gluster
22:25 jiffin joined #gluster
22:31 calavera joined #gluster
22:47 baojg joined #gluster
22:50 CyrilPeponnet joined #gluster
22:51 CyrilPeponnet Hi Guys, I want to give the geo-rep another chance :p
22:51 CyrilPeponnet I have a simple use case
22:52 CyrilPeponnet I have a big file, a qcow, and a symlink "latest" which points to this qcow. Each time there is a new build of the qcow, the symlink is updated to point to the latest image
22:53 CyrilPeponnet Using geo-rep, can I ensure that the image is transferred before the symlink is updated?
22:53 CyrilPeponnet I have been thinking about setting the sync-jobs param to 1; this way I have a sequential transfer (and not parallel tasks).
22:54 CyrilPeponnet I'd like your thoughts about that @JoeJulian @hagarth ^^
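    (The sync-jobs knob mentioned above is a geo-replication config setting; a sketch of how it is usually set, where the master volume, slave host, and slave volume are placeholders and the option name may differ between releases:)
        gluster volume geo-replication mastervol slavehost::slavevol config sync_jobs 1
        # list the current geo-rep configuration to verify
        gluster volume geo-replication mastervol slavehost::slavevol config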
22:58 calavera joined #gluster
23:00 JoeJulian I suspect there's a way, but I have no idea what it would be. I don't know what to look for to see if geo-rep has completed any given file.
23:01 haomaiwa_ joined #gluster
23:02 calavera joined #gluster
23:04 CyrilPeponnet If there is only one job, and as geo-rep is consuming the changelog
23:04 CyrilPeponnet can I expect that the symlink will be updated after the image is sent?
23:06 JoeJulian Not necessarily.
23:07 CyrilPeponnet damn
23:07 JoeJulian If the image is created and the symlink created within the same sync window, it'll do them both. Not sure about what order though. You should test that and let me know.
23:11 shyam left #gluster
23:24 dblack joined #gluster
23:41 JoeJulian I have a file being self-healed with mixed 3.4.4 and 3.6.8 servers (can't upgrade the 3.4.4's until the self-heal is done). But when I chown a file being healed, it hangs ( http://ur1.ca/oglko ). I cleared all inode locks for the file and it was able to proceed (hope I didn't break self-heal doing that).
23:41 glusterbot Title: #316458 Fedora Project Pastebin (at ur1.ca)
23:42 JoeJulian hagarth: is it expected that a chown afr transaction would block until self-heal is complete?
23:42 JoeJulian (data heal)
23:44 hagarth JoeJulian: 3.4 and 3.6 have different versions of afr (v1 vs v2). where is the heal happening from? 3.4 shd, 3.6 shd or a client with a different version?
23:47 JoeJulian Good question. I disabled client heals, so at least that's out of the equation.
23:49 baojg joined #gluster
