
IRC log for #gluster, 2015-09-15


All times shown according to UTC.

Time Nick Message
00:05 shaunm joined #gluster
00:17 gildub joined #gluster
00:38 m0zes joined #gluster
00:59 haomaiwa_ joined #gluster
01:00 zhangjn joined #gluster
01:01 haomaiwa_ joined #gluster
01:04 harish_ joined #gluster
01:06 haomaiwa_ joined #gluster
01:10 haomaiwang joined #gluster
01:10 auzty joined #gluster
01:20 dlambrig joined #gluster
01:34 Lee1092 joined #gluster
01:35 jcastill1 joined #gluster
01:39 jcastillo joined #gluster
01:46 haomaiwa_ joined #gluster
01:47 EinstCrazy joined #gluster
01:49 julim joined #gluster
01:58 calavera joined #gluster
01:59 calavera joined #gluster
02:01 haomaiwa_ joined #gluster
02:17 hgichon joined #gluster
02:22 haomaiwa_ joined #gluster
02:27 baojg joined #gluster
02:28 calavera joined #gluster
02:29 nangthang joined #gluster
02:45 calavera joined #gluster
02:55 calisto joined #gluster
02:57 hgichon joined #gluster
03:01 haomaiwa_ joined #gluster
03:05 [7] joined #gluster
03:08 zhangjn joined #gluster
03:28 VeggieMeat joined #gluster
03:29 lezo joined #gluster
03:30 squaly joined #gluster
03:31 twisted` joined #gluster
03:32 jermudgeon joined #gluster
03:37 beeradb joined #gluster
03:47 squaly joined #gluster
03:49 lanning joined #gluster
03:54 nishanth joined #gluster
03:57 poornimag joined #gluster
03:58 shubhendu joined #gluster
04:01 haomaiwa_ joined #gluster
04:06 sakshi joined #gluster
04:06 nbalacha joined #gluster
04:11 virusuy joined #gluster
04:11 virusuy joined #gluster
04:13 jermudgeon joined #gluster
04:18 itisravi joined #gluster
04:22 maveric_amitc_ joined #gluster
04:24 yazhini joined #gluster
04:28 kanagaraj joined #gluster
04:30 RameshN joined #gluster
04:32 gem joined #gluster
04:35 atinm joined #gluster
04:40 DV joined #gluster
04:41 ramky joined #gluster
04:43 rafi joined #gluster
04:49 DV__ joined #gluster
04:54 ramteid joined #gluster
04:58 neha joined #gluster
04:59 neha joined #gluster
05:01 haomaiwa_ joined #gluster
05:04 ppai joined #gluster
05:07 baojg joined #gluster
05:08 beeradb joined #gluster
05:09 PaulCuzner left #gluster
05:09 Intensity joined #gluster
05:10 skoduri joined #gluster
05:10 sage joined #gluster
05:11 harish joined #gluster
05:16 ndarshan joined #gluster
05:16 raghug joined #gluster
05:22 atalur joined #gluster
05:23 Bhaskarakiran joined #gluster
05:24 jiffin joined #gluster
05:26 pppp joined #gluster
05:30 deepakcs joined #gluster
05:32 hgowtham joined #gluster
05:35 vmallika joined #gluster
05:41 Manikandan joined #gluster
05:47 vimal joined #gluster
05:49 skoduri joined #gluster
05:51 hagarth joined #gluster
05:51 kdhananjay joined #gluster
05:51 baojg joined #gluster
05:54 R0ok_ joined #gluster
05:56 R0ok__ joined #gluster
05:58 R0ok_ joined #gluster
06:00 DV__ joined #gluster
06:01 haomaiwa_ joined #gluster
06:01 raghu joined #gluster
06:05 maveric_amitc_ joined #gluster
06:08 mhulsman joined #gluster
06:11 mhulsman joined #gluster
06:19 jtux joined #gluster
06:20 mhulsman joined #gluster
06:22 free_amitc_ joined #gluster
06:24 ashiq joined #gluster
06:24 mhulsman joined #gluster
06:25 itisravi_ joined #gluster
06:26 ashiq- joined #gluster
06:33 nishanth joined #gluster
06:35 ctria joined #gluster
06:38 shubhendu joined #gluster
06:41 SimmoTali joined #gluster
06:44 rgustafs joined #gluster
06:44 Forlan joined #gluster
06:45 jwd joined #gluster
06:53 kotreshhr joined #gluster
06:54 sakshi joined #gluster
06:55 shubhendu joined #gluster
06:55 ashiq joined #gluster
06:57 anil joined #gluster
06:59 David_Varghese hello, i have 6 vms and replicate to all 6. is it a bad practice to have the gluster client on all 6 vms? im using it to LB web traffic using haproxy
06:59 David_Varghese and also im trying to copy 1.2GB files to gluster. its very slow and sometimes it gets stuck/hangs. how can i improve the performance when copying files?
07:01 haomaiwa_ joined #gluster
07:02 onorua joined #gluster
07:05 ashiq joined #gluster
07:07 nangthang joined #gluster
07:07 Lee- joined #gluster
07:09 hagarth David_Varghese: normally a replica factor of 3 should be good enough for most use cases.
07:09 hagarth David_Varghese: unless you have a high ratio of reads to writes, higher replication factor is usually not advised.
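(A minimal sketch of the replica-3 setup hagarth suggests, assuming hypothetical hostnames vm1-vm3 and a volume name webdata; the remaining VMs would mount the volume as clients rather than holding their own replicas:)
    # on one of the servers: 3-way replication instead of 6-way
    gluster volume create webdata replica 3 vm1:/bricks/webdata vm2:/bricks/webdata vm3:/bricks/webdata
    gluster volume start webdata
    # on every VM that needs the data, including the haproxy/web nodes, mount as a client
    mount -t glusterfs vm1:/webdata /var/www/shared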
07:18 fsimonce joined #gluster
07:18 ju5t joined #gluster
07:19 haomaiwa_ joined #gluster
07:24 DV joined #gluster
07:24 jwd joined #gluster
07:28 hchiramm_home joined #gluster
07:30 social joined #gluster
07:31 DV joined #gluster
07:35 papamoose2 joined #gluster
07:36 sabansal_ joined #gluster
07:37 DV__ joined #gluster
07:44 skoduri joined #gluster
07:48 Philambdo joined #gluster
07:59 Pupeno joined #gluster
08:01 haomaiwang joined #gluster
08:08 arcolife joined #gluster
08:14 zhangjn joined #gluster
08:15 LebedevRI joined #gluster
08:18 streppel joined #gluster
08:18 streppel hey
08:20 social joined #gluster
08:20 streppel i'm trying to set up a simple 2-node gluster configuration with samba shares. it partially works now, i can connect to one node and write files there, and the files appear on the other node. the problem is my second node (gluster2) has horrible performance via samba. i checked that i can reach the other shares on this server with full bandwidth, so that is not the problem. both machines run on fedora server 22 and are up to date with no
08:20 streppel external sources added. they are joined into a domain with a win2008r2 DC.
08:23 zhangjn joined #gluster
08:23 Slashman joined #gluster
08:23 Philambdo joined #gluster
08:28 jwd joined #gluster
08:29 streppel by horrible performance i mean i get about 355KB/s writing on node 2
08:30 SimmoTal_ joined #gluster
08:31 nisroc joined #gluster
08:35 karnan joined #gluster
08:37 itisravi joined #gluster
08:38 Philambdo joined #gluster
08:39 streppel my /var/log/glusterfs/etc-glusterfs-glusterd.vol.log contains "[2015-09-15 08:36:52.838366] W [socket.c:620:__socket_rwv] 0-management: readv on /var/run/bf27c374fb301d59ea7b7e8eb8eff058.socket failed (Das Argument ist ungültig / Invalid argument)
08:39 streppel " about once every 3 seconds
08:39 mhulsman joined #gluster
08:39 Philambdo joined #gluster
08:40 Philambdo joined #gluster
08:48 poornimag joined #gluster
08:48 zhangtao__ joined #gluster
08:56 [Enrico] joined #gluster
08:57 PaulCuzner joined #gluster
08:57 PaulCuzner left #gluster
09:01 RedW joined #gluster
09:04 zhangtao__ joined #gluster
09:05 haomaiwa_ joined #gluster
09:11 overclk joined #gluster
09:13 nishanth joined #gluster
09:14 harish joined #gluster
09:14 Saravana_ joined #gluster
09:14 stickyboy joined #gluster
09:22 spalai joined #gluster
09:27 Manikandan joined #gluster
09:32 nbalacha joined #gluster
09:33 haomaiwang joined #gluster
09:35 Romeor guys.. where do i read release notes about 3.6.5 ??
09:47 DV joined #gluster
09:51 kdhananjay left #gluster
09:53 Romeor nothing on github
09:54 Pupeno_ joined #gluster
09:55 hagarth Romeor: http://www.gluster.org/pipermail/gluster-devel/2015-August/046570.html
09:55 glusterbot Title: [Gluster-devel] glusterfs-3.6.5 released (at www.gluster.org)
09:55 Romeor yeah. just found it
09:55 Romeor tnx
10:01 haomaiwa_ joined #gluster
10:05 David_Varghese hagarth, im trying to copy 1.2GB files to gluster. its very slow and sometimes it gets stuck/hangs. how can i improve the performance when copying files?
10:05 PaulCuzner joined #gluster
10:06 Romeor David_Varghese, how do you copy the file? (what cmd)
10:09 overclk joined #gluster
10:10 rastar streppel: do you mean you are running samba on both nodes and getting different performance from both nodes?
10:10 haomaiwang joined #gluster
10:10 _shaps_ joined #gluster
10:15 hagarth David_Varghese: if you are using 6-way replication, gluster tries to ensure that all 6 writes are on the bricks before it acknowledges the write to your application. this would mean that your performance would be bound by the slowest disk in your setup and the network latency across those 6 nodes
10:16 shubhendu joined #gluster
10:16 baojg joined #gluster
10:19 jwd joined #gluster
10:21 julim joined #gluster
10:22 haomai___ joined #gluster
10:23 streppel rastar: yes, exactly this. since we only have 100mbit network i want to balance the load by providing multiple endpoints the clients can connect to.
10:24 rastar streppel: could you paste the gluster vol info @paste
10:24 rastar @paste
10:24 glusterbot rastar: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
10:26 raghu joined #gluster
10:26 streppel rastar: http://termbin.com/x8cn
10:27 rastar streppel: ok, now how are the two samba servers synchronized?
10:27 rastar are you using ctdb
10:28 streppel currently they are not synchronized (unless there is a built-in mechanism that does this)
10:28 rastar If you use multiple samba servers to export the same Gluster volume without ctdb you have risk of data corruption as locks are not shared.
10:29 rastar However, that shouldn't be the reason for the performance drop
10:29 rastar The only possible reasons are
10:29 rastar a. the network from windows client to node2 isn't good enough
10:30 rastar actually that is the only reason I can think of
10:30 streppel i verified this by using a different share on the same node (node2) and was able to achieve full performance
10:31 rastar streppel: oh yes you said that, sorry
10:31 rastar streppel: you are using the vfs plugin method or are you re exporting fuse mount?
10:31 streppel vfs plugin method
10:32 streppel http://termbin.com/qg0h
10:32 streppel i'm using the exact same config on both nodes, that's why i don't understand why it works on one and not on the other
10:34 rastar streppel: config looks good
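(For reference, a vfs_glusterfs share in smb.conf typically looks roughly like the following; the share and volume names here are placeholders, not streppel's actual paste:)
    [gv0]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = gv0
        glusterfs:logfile = /var/log/samba/glusterfs-gv0.log
        kernel share modes = no
        read only = no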
10:34 owlbot joined #gluster
10:34 stickyboy joined #gluster
10:36 rastar streppel: does gluster volume heal <volname> info show anything?
10:37 streppel rastar: http://termbin.com/q7g2
10:39 rastar streppel: everything looks good.
10:39 rastar streppel: could it be some hardware issue on node 2?
10:40 rastar streppel: in theory your write speeds on this gluster vol should go up to 6MB/s
10:40 rastar streppel: what is the write speed you get from node1?
10:40 streppel rastar: i don't think so, just as i said, i can connect to it via a regular samba share and transfer files back and forth at full speed, so it's not the nic, memory should be fine too
10:41 streppel rastar: ~10MB/s
10:41 rastar streppel: ok, one last check, please shutdown Samba server and try the speed from node 2 now
10:41 rastar *samba server on node1
10:41 streppel ok
10:42 streppel rastar: again at 355KB/s
10:43 rastar streppel: ok, so it is not multi node samba causing this
10:44 rastar streppel: I have run out of ideas. You can try a fuse mount of same vol on node2 itself and check to eliminate samba completely.
10:44 rastar streppel: mount -t glusterfs node2:/volname /mnt
10:45 rastar and check the write speed here. If the speed is bad then it's a gluster issue, else it's a samba issue.
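(One way to run the write-speed check rastar describes, with an arbitrary file name and size; conv=fsync forces the data out so the number isn't inflated by the page cache:)
    mount -t glusterfs node2:/volname /mnt
    dd if=/dev/zero of=/mnt/ddtest.bin bs=1M count=100 conv=fsync
    rm /mnt/ddtest.bin    # repeat the same dd on node1's mount for comparison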
10:45 Bhaskarakiran joined #gluster
10:45 ekuric joined #gluster
10:46 shubhendu joined #gluster
10:46 suliba joined #gluster
10:50 Manikandan joined #gluster
10:51 ndarshan joined #gluster
10:51 streppel rastar: http://termbin.com/ekfs
10:52 streppel didn't improve by setting a bigger blocksize
10:53 klaxa joined #gluster
10:53 pppp joined #gluster
10:55 raghug joined #gluster
10:56 Bhaskarakiran joined #gluster
10:57 ashka hi, I have an issue, I want to mount a glusterfs volume automatically at boot but it runs through a software tunnel (openvpn). The issue there is that _netdev seems to still trigger the mount too early, so the mount times out
10:57 streppel ashka: you could use autofs to mount the volume on demand. additionally it will keep retrying until the tunnel is available iirc
10:58 ashka streppel: I'll see about that, thanks
10:59 streppel ashka: else you could use a startup script for the network interface; it will run when the tunnel is available and automatically mount the share then. and with an if-down script unmount it when the connection closes
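(A rough sketch of the autofs approach streppel mentions; the mountpoint, map file and server name are illustrative only:)
    # /etc/auto.master: add a direct map
    /-    /etc/auto.gluster
    # /etc/auto.gluster: mount on first access, retried automatically while the VPN is down
    /mnt/gv0    -fstype=glusterfs    vpnserver:/gv0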
11:00 ashka could work as well yeah
11:01 haomaiwa_ joined #gluster
11:05 poornimag joined #gluster
11:11 ira joined #gluster
11:14 bennyturns joined #gluster
11:18 raghug joined #gluster
11:32 Philambdo1 joined #gluster
11:36 jcastill1 joined #gluster
11:37 kotreshhr left #gluster
11:38 spalai left #gluster
11:39 EinstCrazy joined #gluster
11:40 rastar streppel: sorry, I was afk
11:41 jcastillo joined #gluster
11:42 rastar streppel: highly unlikely but I will ask, is the network between node1 and node2 asymmetric?
11:42 streppel rastar: no problem. no its a symmetric network
11:42 shubhendu joined #gluster
11:45 kkeithley1 joined #gluster
11:45 rastar streppel: do you know how replicate volume works?
11:46 rastar streppel: the data is synchronously written to both the bricks. So, if the client is connected to the node2 Samba server, data goes from the windows client to node2, is written simultaneously on both node1 and node2 bricks, and then success is returned.
11:48 streppel rastar: but for reading i could use either one of the nodes to retrieve the data, so i can loadbalance. (reading is much more common in our usecase than writing)
11:48 rastar streppel: yes, yes
11:48 jiffin1 joined #gluster
11:48 rastar this was just for write
11:48 rastar what I am trying to say is whether you use Samba on node1 or node2, data is written on both
11:48 rastar hence the disk is not the bottleneck
11:49 streppel yep, of course
11:49 rastar streppel: check if name resolution is fine between node1 and node2
11:50 streppel rastar: works fine
11:50 rastar streppel: ok, I am out of ideas.
11:51 streppel rastar: gluster1 => cname to an a-record which resolves to the correct IP, gluster2 the same
11:51 rastar yes, but does it take a long time when resolving node1 from node2?
11:52 streppel instantly resolved
11:52 streppel rastar: the DC handling DNS is half a meter away in the same network, that shouldn't be the issue either
11:53 rastar streppel: you could ask someone from the afr team here. we have eliminated Samba as the cause.
11:54 rastar streppel: the problem statement is: write performance is bad for a 1x2 volume on a fuse mount on node2 while it is fine on node1
11:55 rastar ndevos: ^^
11:55 rastar ndevos: if you already know about such bug or if you know who might know about it
11:56 ndevos rastar: I dont know, would that not be an AFR issue?
11:57 rastar ndevos: yes, but wanted to know if there is anything in the setup that could lead to it
11:57 rastar itisravi: ^^
11:58 rastar itisravi: 5 lines above
12:02 rafi REMINDER: Gluster Bug Triage is now starting in #gluster-meeting
12:03 streppel thanks for your help already! :)
12:03 itisravi rastar: streppel does the mount log on node2 show any errors?
12:03 rjoseph joined #gluster
12:05 streppel i don't get an error upon mounting, nothing in dmesg, nothing in /var/log/messages
12:05 jiffin joined #gluster
12:06 itisravi streppel: anything suspicious in /var/log/glusterfs/<fuse-mount-point>.log?
12:07 streppel itisravi: [client-handshake.c:1210:client_setvolume_cbk] 0-DFS-client-0: Server and Client lk-version numbers are not same, reopening the fds
12:07 streppel , but besides that nothing
12:07 glusterbot streppel: This is normal behavior and can safely be ignored.
12:07 David-Varghese joined #gluster
12:07 itisravi ^True that.
12:08 edong23 joined #gluster
12:10 Philambdo joined #gluster
12:14 shubhendu joined #gluster
12:17 jtux joined #gluster
12:19 mpietersen joined #gluster
12:19 hagarth joined #gluster
12:20 mpietersen joined #gluster
12:24 calisto joined #gluster
12:25 itisravi streppel: Just a crazy thought, are there any iptable rules on node-2 that might drop outgoing packets?
12:25 DV joined #gluster
12:27 streppel itisravi: iptables should be off: http://termbin.com/157b
12:29 streppel itisravi: just flushed all rules, no change
12:33 itisravi streppel: can you scp a 500 MB file from node1 to node2 and then from node2 to node1 (all this outside gluster- just a machine to machine copy) and see if the throughput is same in both cases?
12:35 unclemarc joined #gluster
12:35 streppel i will, sec
12:35 itisravi okay
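(A simple way to carry out itisravi's test, assuming ssh access between the nodes and a throwaway file:)
    # on node1: create a 500 MB test file and push it to node2
    dd if=/dev/zero of=/tmp/testfile bs=1M count=500
    scp /tmp/testfile node2:/tmp/
    # on node2: push it back and compare the throughput scp reports
    scp /tmp/testfile node1:/tmp/testfile.back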
12:38 jcastill1 joined #gluster
12:39 shaunm joined #gluster
12:42 streppel from node2 to node1 is horribly slow (280KB/s). so that seems to be the issue...
12:47 streppel even though i don't understand why i can write with 100mbit/s on a plain samba share but not on the glusterfs share, but since it seems to be the hardware (server itself or network) i'll see what i can do to fix this
12:48 streppel ah i get it now. since the writing has to be synchronized, gluster writes the data to node1. that happens at ~300KB/s, so it can't get new data until the old is written. that's why...
12:48 streppel thanks for helping me diagnose it
12:56 jcastillo joined #gluster
12:56 neha joined #gluster
12:56 amye joined #gluster
13:00 itisravi streppel: great
13:00 shubhendu joined #gluster
13:11 qubozik joined #gluster
13:16 spalai joined #gluster
13:17 Bhaskarakiran joined #gluster
13:22 _NiC joined #gluster
13:23 qubozik joined #gluster
13:25 natarej joined #gluster
13:25 natarej_ joined #gluster
13:29 harold joined #gluster
13:31 plarsen joined #gluster
13:40 Manikandan joined #gluster
13:40 LebedevRI joined #gluster
13:40 _Bryan_ joined #gluster
13:46 shaunm joined #gluster
13:48 arcolife joined #gluster
13:50 yangfeng joined #gluster
13:51 qubozik joined #gluster
13:52 nbalacha joined #gluster
13:56 ctria joined #gluster
13:57 dgandhi joined #gluster
14:14 hgowtham joined #gluster
14:15 TheCthulhu joined #gluster
14:17 neofob joined #gluster
14:18 plarsen joined #gluster
14:18 plarsen joined #gluster
14:22 qubozik joined #gluster
14:27 squizzi_ joined #gluster
14:29 onorua joined #gluster
14:35 julim joined #gluster
14:37 kbyrne joined #gluster
14:40 kbyrne joined #gluster
14:41 papamoose joined #gluster
14:46 dijuremo joined #gluster
14:48 Pupeno joined #gluster
14:49 ashiq joined #gluster
15:09 wushudoin joined #gluster
15:11 johnmark kkeithley: ping re: IRC meeting space
15:12 JoeJulian hiya johnmark.
15:12 johnmark JoeJulian: howdy!
15:12 JoeJulian How's the new gig?
15:13 JoeJulian and remember... this is logged for posterity.
15:16 johnmark JoeJulian: lol :)
15:16 johnmark kkeithley: dude, I don't have ops in the channel anymore, because I gave you founder ops
15:16 johnmark amye: ^^^
15:17 amye johnmark: Ha. And of course it's not showing up here. ;)
15:17 johnmark lol
15:21 wushudoin joined #gluster
15:22 ELCALOR left #gluster
15:34 gorfel joined #gluster
15:37 gorfel When adding new data to a distributed-replicated setup 2x2=4 bricks, should I do it via a client using FUSE? It seems like a very inefficient way to add data (rsyncing hundreds of thousands of image files). Would it be better to add a brick with the data already on it and then rebalance?
15:37 jiffin joined #gluster
15:39 JoeJulian gorfel: You can usually create a volume with the first brick loaded with data. It's considered "undefined" behavior, but we've been doing it for years.
15:39 JoeJulian but don't count on it to continue to work forever.
15:40 qubozik_ joined #gluster
15:40 Manikandan joined #gluster
15:41 JoeJulian It's also very inefficient to copy those 100 thousands of files onto a new hard drive formatted with xfs when you've been using ext4 all these years. But formatting your ext4 filesystem with xfs isn't going to convert it. Think similarly. You're loading a filesystem, not some synchronization tool.
15:44 raghug joined #gluster
15:44 gorfel JoeJulian: Ok. thanks. Your comment concerning filesystems makes sense. I think I will try loading a brick with the data first and then rebalancing. Rsyncing to the client is really slow.
15:44 JoeJulian two more bits...
15:45 JoeJulian A rebalance will move files from the single loaded brick to spread the files out (more or less) evenly between the distribute subvolumes. That's not the replica heal which is separate.
15:45 JoeJulian And rsync is really inefficient.
15:46 JoeJulian cpio is much faster.
15:46 JoeJulian s/inefficient/inefficient for this task./
15:46 glusterbot What JoeJulian meant to say was: And rsync is really inefficient for this task..
15:49 gorfel yes, you are probably right considering the encryption overhead of rsync. I'll copy the data some way, create the volume, do a rebalance and then a replica heal.
15:54 JoeJulian encryption, multiple stat checks, temporary filenames that are renamed causing dht misplacement...
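(A sketch of the cpio-style bulk load JoeJulian recommends instead of rsync; the source directory and mount point are examples only:)
    cd /data/images
    find . -depth -print0 | cpio --null -pdm /mnt/glustervol
    # -p copies in pass-through mode, -d creates directories, -m preserves mtimes;
    # unlike rsync there are no temporary names to be renamed afterwards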
15:54 _maserati joined #gluster
16:07 rarrr joined #gluster
16:10 rarrr testing gluster 3.3 self-healing, I am facing the following issue. Using a replicated volume (based on two bricks, located on two different servers), if I take down one of the bricks/servers, write to the volume, and bring the offline brick/server up again, the data gets instantly copied to the now-available brick/server. Is that the expected behaviour for self-healing in 3.3? After reading the docs I expected some delay before shd replicated data to the recovered brick
16:11 plarsen joined #gluster
16:12 jwaibel joined #gluster
16:15 Rapture joined #gluster
16:18 JoeJulian It's possible for there to be a delay, but once shd reconnects to the missing brick, it should trigger the self-heal immediately.
16:18 rafi joined #gluster
16:19 rarrr thanks JoeJulian
16:23 rarrr joined #gluster
16:23 skoduri joined #gluster
16:24 rarrr JoeJulian, is there any scenario where self-healing has to be triggered manually (with gluster volume heal, commands)?
16:24 rarrr if shd takes care of automatic self-healing
16:24 JoeJulian There shouldn't be.
16:25 JoeJulian But you know how that goes.
16:25 rarrr JoeJulian, k, thanks
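(For completeness, the manual heal commands being referred to, against a hypothetical volume name:)
    gluster volume heal myvol info    # list entries still pending heal
    gluster volume heal myvol         # heal only the files known to need it
    gluster volume heal myvol full    # crawl the whole volume and heal everything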
16:26 free_amitc_ joined #gluster
16:26 maveric_amitc_ joined #gluster
16:46 onorua joined #gluster
16:51 DV joined #gluster
17:00 social joined #gluster
17:04 jalljo joined #gluster
17:09 chris_ joined #gluster
17:11 chris_ joined #gluster
17:11 mhulsman joined #gluster
17:11 josh joined #gluster
17:12 qubozik joined #gluster
17:14 chris_ left #gluster
17:15 Manikandan_ joined #gluster
17:19 josh222_ joined #gluster
17:21 josh222_ joined #gluster
17:21 cholcombe joined #gluster
17:25 josh222_ joined #gluster
17:25 spcmastertim joined #gluster
17:26 josh222_ joined #gluster
17:27 josh222_ joined #gluster
17:29 qubozik joined #gluster
17:30 josh222_ joined #gluster
17:33 josh222_ joined #gluster
17:34 josh222_ joined #gluster
17:36 josh222_ not sure quite how this works, but i had some questions regarding gluster quorum.  can anyone help?
17:37 skoduri joined #gluster
17:51 qubozik joined #gluster
17:55 nishanth joined #gluster
17:59 qubozik joined #gluster
17:59 jocke- joined #gluster
18:01 _maserati_ joined #gluster
18:03 jocke- left #gluster
18:06 JoeJulian josh222_: Usually, yes, someone can help.
18:07 Manikandan__ joined #gluster
18:07 jockek left #gluster
18:07 jockek joined #gluster
18:21 calisto joined #gluster
18:27 virusuy can I have a distributed-replicated volume with different versions of gluster? (2 nodes on 3.4 and 2 on 3.7)
18:30 Manikandan_ joined #gluster
18:33 JoeJulian It's not recommended. It might work though.
18:36 virusuy thanks JoeJulian
18:36 wushudoin joined #gluster
18:36 Manikandan__ joined #gluster
18:50 mhulsman joined #gluster
18:56 amye1 joined #gluster
18:56 timotheus1 joined #gluster
19:08 Pupeno joined #gluster
19:13 wushudoin joined #gluster
19:14 PaulCuzner left #gluster
19:22 Pupeno joined #gluster
19:47 _maserati joined #gluster
20:03 wolsen joined #gluster
20:18 _maserati joined #gluster
20:24 virusuy Hi guys, if I have a distributed-replicated volume (2x2), to expand it i need to add 4 more bricks, right?
20:25 virusuy when i run gluster volume info it says ( Number of Bricks: 2 x 2 = 4 )
20:32 gorfel joined #gluster
20:32 msvbhat virusuy: You can add 2 more bricks to make it 3*2
20:32 virusuy msvbhat:  cool
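(A sketch of msvbhat's suggestion for growing a 2x2 volume to 3x2; hostnames and brick paths are placeholders:)
    gluster volume add-brick myvol server5:/bricks/b1 server6:/bricks/b1
    gluster volume rebalance myvol start     # spread existing files onto the new pair
    gluster volume rebalance myvol status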
20:33 gorfel I have a question concerning gluster across two datacenters. iperf3 measures about 5Gbit/sec first site -> second site and 3Gbits/sec second site -> first site. There are two gluster servers on each site. Would it be best to use geo-replication or normal replication in such a scenario?
20:36 badone joined #gluster
20:44 JoeJulian gorfel: latency is usually the bigger problem.
20:50 Nebraskka Heya! Expanded my volume by adding another replica (volume add-brick aaa replica 3 ...). Do I need to run volume heal aaa full, to get it synced from the previous 2 replicas?
20:50 Nebraskka or is there another practice for syncing?
20:50 Nebraskka as i understand, by default the added replica is empty
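(Nebraskka's question isn't answered in the log; on 3.x releases a commonly used sequence after raising the replica count is roughly the following, with aaa standing in for the volume name as above:)
    gluster volume add-brick aaa replica 3 newhost:/bricks/aaa
    gluster volume heal aaa full    # push existing data onto the new, empty replica
    gluster volume heal aaa info    # watch the pending-heal list drain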
20:51 _maserati_ joined #gluster
20:51 gorfel JoeJulian: Latency tested with ping -c 10 -i 0.2: site 1 -> site 2: rtt min/avg/max/mdev = 1.303/1.408/1.560/0.083 ms, site 2 -> site 1: rtt min/avg/max/mdev = 1.336/1.375/1.445/0.054 ms
20:52 Nebraskka i'm no expert on glusterfs yet, but 1ms latency sounds good to me for a usual replica, without geo
20:54 Nebraskka but it depends on the goal
20:54 Nebraskka if the other datacenter is just a disaster-recovery one, and no production stuff runs there, i think there is no need for a master-master setup and geo would be enough
20:55 Nebraskka but if both datacenters are getting load, geo wouldn't work as you expect, gorfel
20:55 Nebraskka it syncs files asynchronously in master-slave style, so there's no guarantee that it would be fresh enough on the other side
20:55 _maserati joined #gluster
20:56 gorfel Nebraskka: Thanks for explaining that. So, geo-rep is only for backup scenarios.
20:56 Nebraskka afaik yes
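(For reference, geo-replication between two sites is set up along these lines; the volume and host names are made up and the exact steps differ slightly between gluster versions:)
    # after passwordless ssh from the master cluster to the slave host is in place
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status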
20:57 gorfel Nebraskka: I tested earlier with a distributed-replicated setup across the datacenters, but I was unsure if this was the correct approach when storing 100's of thousands of small files. I want to get it right now, and not later when in production. :)
20:58 qubozik_ joined #gluster
20:59 muneerse joined #gluster
21:02 gorfel Nebraskka: Disaster test with 500 files (shutting down gluster servers on site 1) worked as expected, so perhaps distributed-replicated is the correct way to ensure both acceptable read performance (write is not important for me so much) and data redundancy in my scenario.
21:07 amye joined #gluster
21:11 Nebraskka gorfel, i see
21:13 badone joined #gluster
21:13 Nebraskka i don't know how fast the failed part of gluster would sync that amount of tiny files, but gluster itself doesn't have any problems with file size; it's about latency
21:14 Nebraskka gorfel, because of the latency and the double checks on both sides of the cluster for every file, there's a little overhead on each file copy, so when copying lots of files it can look like it's going slowly, but actually it's just checking the file's existence before every file transaction
21:15 gorfel Nebraskka: Great! Thanks for clearing that up. It took a very long time just to copy over 2 of 80 gigs of files using glusterfs fuse mount. What I intend to do now is to copy the files directly to brick before creating the volume and do the rebalancing/self-heal afterwards.
21:17 dgandhi joined #gluster
21:21 Nebraskka gorfel, hmm, i'm not sure it would get replicated if you copy files directly into the brick folder (if the brick is just a folder), but as far as i know an nfs mount is a little faster than a fuse mount, because it doesn't do the failover checks,
21:21 Nebraskka but in this case you'll need some balancer like haproxy or such, because nfs mounts can't fail over themselves, they just mount to one server
21:21 Nebraskka maybe an nfs mount could be used for the initial copy
21:23 Pupeno joined #gluster
21:24 gorfel It seemed the fuse mount could handle the failover automatically, though.
21:25 gorfel I also see there are options when mounting glusterfs where you can specify a backup server.
21:25 gorfel I would rather not use nfs.
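(The mount option gorfel is referring to looks roughly like this in recent 3.x releases; the server names are placeholders:)
    mount -t glusterfs -o backup-volfile-servers=site1-b:site2-a:site2-b site1-a:/gv0 /mnt/gv0
    # or, in /etc/fstab:
    # site1-a:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backup-volfile-servers=site1-b:site2-a:site2-b  0 0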
21:33 gorfel Nebraskka: Thanks for your help. More testing tomorrow.
21:36 gorfel Nebraskka: Sorry, I didn't read your answer concerning the fuse mount properly. If the experiment with the direct copy to brick folder fails to replicate I will certainly try the nfs route.
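(A rough outline of the plan gorfel describes, with placeholder names; as JoeJulian noted above, preloading a brick is formally undefined behaviour:)
    # data is already sitting in /bricks/gv0 on site1-a before the volume exists
    gluster volume create gv0 replica 2 site1-a:/bricks/gv0 site1-b:/bricks/gv0 site2-a:/bricks/gv0 site2-b:/bricks/gv0
    gluster volume start gv0
    gluster volume heal gv0 full          # copy data onto the empty replica partners
    gluster volume rebalance gv0 start    # spread files across the distribute subvolumes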
21:48 _maserati im confused, how do i make a new volume with 0 stripe, 2 replica ?
21:48 _maserati i tried:
21:48 _maserati volume create dev_volume replica 2 transport tcp codncr-st-2410:/mnt/glusterdev codncr-st-2411:/mnt/glusterdev
21:49 _maserati nvm ... it worked the 2nd try...
22:02 muneerse2 joined #gluster
22:05 _maserati I moved all my files out of a glusterfs volume to a temp storage place, deleted the gluster volume, made a new one, and now that im trying to bring all that data back into gluster it's throwing these out everywhere:
22:05 _maserati mv: setting attribute `trusted.gfid' for `trusted.gfid': Operation not permitted
22:05 _maserati is this bad?
22:07 Rapture joined #gluster
22:08 DV__ joined #gluster
22:18 _maserati JoeJulian: Sorry to bug ya, but could you read my last 3 chat lines
22:23 JoeJulian Are you using _maserati Is that in marker?
22:24 _maserati what?
22:25 JoeJulian Those messages. They usually come with what function, c file, and line number they're coming from.
22:25 _maserati im using "mv"
22:25 JoeJulian Also they note whether they're a warning, info, error, etc.
22:25 _maserati mv OldData /newglustervol/
22:26 _maserati it seems to be moving every file in, albeit with these operation not permitted lines
22:27 _maserati i was just kinda hoping it'd treat these as regular files when i move them back into a new gluster volume
22:27 _maserati regular new files
22:28 _maserati its almost done moving all the data back into the new glusterfs vol. All files are showing on each brick as they should
22:28 _maserati I just hope this operation not permitted stuff isnt going to cause chaos
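(The warnings come from mv trying to preserve gluster's own trusted.* extended attributes, which suggests the files were taken from a brick directory rather than through a mount; the fuse client refuses to let clients set those. A hedged way to inspect what the source files carry, and to copy without preserving xattrs, with example paths:)
    getfattr -d -m . -e hex /tempstorage/somefile    # shows any leftover trusted.gfid etc.
    # copying with a tool that does not preserve xattrs avoids the warnings, e.g.
    cp -r /tempstorage/. /newglustervol/             # plain cp -r does not copy extended attributes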
22:30 capri joined #gluster
22:34 Pupeno joined #gluster
22:42 amye joined #gluster
22:50 cliluw joined #gluster
22:51 gildub joined #gluster
23:05 Rapture joined #gluster
23:30 Pupeno joined #gluster
23:40 squizzi_ joined #gluster
23:48 squizzi joined #gluster
