
IRC log for #gluster, 2015-12-06


All times shown according to UTC.

Time Nick Message
00:08 delhage joined #gluster
00:22 diegows joined #gluster
01:12 Telsin joined #gluster
02:13 EinstCrazy joined #gluster
03:05 shyam joined #gluster
03:08 Pupeno joined #gluster
03:08 Pupeno joined #gluster
04:01 plarsen joined #gluster
04:17 Lee1092 joined #gluster
05:03 Intensity joined #gluster
05:10 julim joined #gluster
05:11 overclk joined #gluster
05:13 skoduri joined #gluster
06:01 cliluw joined #gluster
07:00 nishanth joined #gluster
07:16 haomaiwa_ joined #gluster
07:35 Pupeno joined #gluster
07:59 overclk joined #gluster
08:11 nishanth joined #gluster
08:11 julim joined #gluster
09:18 kotreshhr joined #gluster
09:35 Pupeno joined #gluster
09:37 armyriad joined #gluster
09:50 harish joined #gluster
09:52 kotreshhr left #gluster
10:05 zhangjn joined #gluster
10:05 zhangjn joined #gluster
10:06 mobaer joined #gluster
10:10 Pupeno joined #gluster
10:15 rafi joined #gluster
10:41 jmarley joined #gluster
10:42 kotreshhr1 joined #gluster
10:47 zhangjn joined #gluster
10:48 jwd joined #gluster
11:10 julim joined #gluster
11:11 badone joined #gluster
11:28 zhangjn joined #gluster
11:40 skoduri joined #gluster
12:27 zhangjn joined #gluster
12:32 kotreshhr1 left #gluster
12:32 DV joined #gluster
12:44 cyberbootje joined #gluster
12:57 hos7ein joined #gluster
13:08 Amun_Ra I've set up glusterfs (two master nodes), is there a way to force a full sync from node2 to node1? they have been out of sync for more than 2 days
13:15 Amun_Ra both nodes report heal-failed files on node1
13:17 Amun_Ra however gluster volume heal … info reports 0 entries to heal
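
For reference, with a replicated volume the usual way to force a resync is to trigger a full self-heal rather than copy data between bricks by hand; a minimal sketch, assuming a volume named gv0 (placeholder):

    gluster volume heal gv0 full               # start a full self-heal crawl
    gluster volume heal gv0 info               # entries still pending heal
    gluster volume heal gv0 info heal-failed   # only present in older 3.x releases

The heal-failed listing was dropped in later releases, so whether it works depends on the version installed.
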
13:24 d0nn1e joined #gluster
13:37 mobaer joined #gluster
13:38 cvstealth joined #gluster
13:40 natgeorg joined #gluster
13:40 malevolent_ joined #gluster
13:41 cabillman_ joined #gluster
13:41 ndevos_ joined #gluster
13:41 Ramereth|home joined #gluster
13:41 ndevos_ joined #gluster
13:41 julim joined #gluster
13:42 p8952_ joined #gluster
13:42 obnox_ joined #gluster
13:42 Champi_ joined #gluster
13:43 cyberbootje joined #gluster
13:43 deni_ joined #gluster
13:43 Jmainguy1 joined #gluster
13:44 mswart joined #gluster
13:45 and` joined #gluster
13:46 mlhess joined #gluster
13:47 steveeJ joined #gluster
13:47 Akee joined #gluster
13:48 marlinc joined #gluster
13:50 and` joined #gluster
13:50 Chr1st1an joined #gluster
13:51 lezo joined #gluster
13:54 janegil joined #gluster
14:04 haomaiwa_ joined #gluster
14:05 diegows joined #gluster
14:08 and` joined #gluster
14:15 kovshenin joined #gluster
14:16 EinstCrazy joined #gluster
14:19 and` joined #gluster
14:29 bivak joined #gluster
14:53 jwd joined #gluster
14:58 and` joined #gluster
15:01 haomaiwang joined #gluster
15:15 Akee1 joined #gluster
15:21 oytun joined #gluster
15:23 oytun Hello everybody, I was wondering about your average performance experience. I am seeing write performance of 2 MB/s at the moment (`dd` command with a 1M block size, 50M single file). On the other hand, another 10MB file is taking more than 30 seconds to move from local to the shared disk on a client server. Any ideas, experiences, tips or guidance?
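
The exact dd invocation isn't shown; a benchmark along those lines, run against the FUSE mount rather than the brick and with the page cache taken out of the picture, would look roughly like this (mount path is a placeholder):

    # write a 50 MB file in 1 MB blocks to the gluster mount, forcing it to disk at the end
    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=1M count=50 conv=fsync

Without conv=fsync (or oflag=direct) dd largely measures how fast the client's page cache absorbs the writes, so results can vary wildly between runs.
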
15:24 cvstealth joined #gluster
15:25 ahino joined #gluster
15:32 rafi joined #gluster
15:43 haomaiwa_ joined #gluster
15:44 kovshenin joined #gluster
16:09 plarsen joined #gluster
16:35 dlambrig joined #gluster
16:52 shaunm joined #gluster
16:57 julim joined #gluster
17:02 hagarth joined #gluster
17:27 kotreshhr joined #gluster
17:43 mlncn joined #gluster
17:56 klaxa joined #gluster
18:27 JoeJulian Amun_Ra: Do not write to the bricks. This isn't a replication system, it's a clustered filesystem. Mount the volume and use that.
18:28 Amun_Ra JoeJulian: I do not write to the brick, I write directly to mounted directory
18:30 JoeJulian Then I would check your network connections. The client should be writing to all replicas simultaneously.
18:30 Amun_Ra JoeJulian: I think I have a split brain, I restarted the second node while writing to the first node; there should be a way to sync the second node if I
18:30 Amun_Ra …if I'm sure node1 has the current data
18:30 JoeJulian @split-brain
18:30 glusterbot JoeJulian: To heal split-brains, see https://gluster.readthedocs.org/en/release-3.7.0/Features/heal-info-and-split-brain-resolution/ For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
18:31 JoeJulian But it should tell you if you have split brain in gluster volume heal info
18:31 JoeJulian Assuming you're using a current version
18:31 Amun_Ra I've read I have to turn both nodes offline and manually rsync data
18:31 JoeJulian Where did you read that garbage?
18:32 Amun_Ra some random site on the net
18:32 JoeJulian I wish I could delete stupid stuff from the internet.
18:32 Amun_Ra :>
18:32 kotreshhr1 joined #gluster
18:33 JoeJulian Check your client logs. If I were a betting man, I'd put my money on firewalls.
18:34 Amun_Ra but I have 60GB data on node 1 and 59GB on node2
18:36 Amun_Ra it's not a split brain http://pastebin.com/dAiyHASz
18:36 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
18:36 Jmainguy http://fpaste.org is awesome
18:36 glusterbot Title: New paste Fedora Project Pastebin (at fpaste.org)
18:43 Amun_Ra so everything is fine but the data isn't there ;>
18:46 Amun_Ra df on both nodes: http://fpaste.org/297949/42759814/
18:46 glusterbot Title: #297949 Fedora Project Pastebin (at fpaste.org)
18:51 Amun_Ra I guess I'll have to repopulate both bricks from scratch tomorrow, or even recreate them
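
For the record, the split-brain checks and policy-based resolution from the docs glusterbot linked above come down to commands roughly like these on 3.7 (volume name, file path and source brick are placeholders):

    gluster volume heal gv0 info split-brain
    gluster volume heal gv0 split-brain latest-mtime /path/within/volume/file
    gluster volume heal gv0 split-brain source-brick node1:/export/brick1 /path/within/volume/file

In this case heal info and the paste showed no split brain, so these are noted only for completeness.
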
18:54 mobaer joined #gluster
18:57 kotreshhr1 left #gluster
18:58 oytun joined #gluster
18:59 ahino joined #gluster
19:02 oytun I am encountering some weird performance cases here. One of the bricks is giving 27 MB/s write and the other one is 2 MB/s.
19:02 oytun Has any of you experienced such a thing?
19:04 oytun I upgraded the disks to SSD, it didn't change at all
19:04 oytun it increased to 2.7 MB/s, so to speak.
19:06 oytun both volumes have identical configurations
19:06 oytun the only difference I see is the slow one is 95% full, the other is around 25%
19:06 oytun another difference is, the slower volume has 3 clients connecting to it.
19:07 oytun faster one has 1 client
19:07 oytun cluster has only 2 nodes
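
One way to narrow down a per-brick difference like that is gluster's built-in profiling, which reports per-brick latency and throughput; a minimal sketch, assuming a volume named gv0:

    gluster volume profile gv0 start
    # run the slow workload, then:
    gluster volume profile gv0 info
    gluster volume profile gv0 stop

A brick that is 95% full can also slow down allocation on the underlying filesystem, so comparing df and iostat on the two bricks while the profile runs is worth the effort.
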
19:27 purpleidea joined #gluster
19:27 purpleidea joined #gluster
19:28 julim joined #gluster
19:37 Amun_Ra I've removed files via the client from node1 and node2, node1's .glusterfs dir has 259MB of data, node2's .glusterfs dir has 2.5G of data; I'll give glusterfs one more chance and switch to ocfs2/drbd if the cluster data ends up out of sync again
19:38 jwaibel joined #gluster
19:39 oytun I encounter these out-of-sync cases as well, but somehow when I request files from the client, it always returns the correct one
19:41 jwd_ joined #gluster
19:42 Amun_Ra oytun: it will work if both nodes are online
19:42 oytun yeah, so I was assuming they should be in sync after a while
19:43 Amun_Ra oytun: I ran across a situation where gluster reported no files to heal and I had 1GB less data on the second replica node
19:43 oytun mine is worse, I saw 20GB gap
19:44 oytun and now I am encountering very slow write speeds (2MB/s). Planning to do a refactoring in the app and transition to AWS S3.
19:46 Amun_Ra oytun: I need some stable replication of maildirs (ca. 700GB of data) and the first glusterfs test failed
19:47 oytun :(
19:48 Amun_Ra it would be better to have four master nodes, but I'll be satisfied with a master/slave solution
19:50 mhulsman joined #gluster
19:51 Amun_Ra I'll change one thing for the second test: I'll set up the bricks on a separate VLAN and interface (I set up both on the same iface for the first test)
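
Moving brick traffic onto its own VLAN usually comes down to name resolution: give each node a hostname that resolves to its storage-VLAN address and probe/create the volume with those names. A rough sketch with placeholder names and addresses:

    # /etc/hosts on both nodes
    10.10.10.1  node1-storage
    10.10.10.2  node2-storage

    gluster peer probe node2-storage
    gluster volume create gv0 replica 2 node1-storage:/export/brick1 node2-storage:/export/brick1

Note that with a replicated volume the clients write to all bricks directly, so they need a route to that VLAN as well.
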
20:10 mlncn joined #gluster
20:14 EinstCrazy joined #gluster
20:23 gothos left #gluster
20:26 tree333 joined #gluster
20:43 julim joined #gluster
20:57 n-st joined #gluster
21:06 plarsen joined #gluster
21:37 devinmcelheran joined #gluster
21:38 devinmcelheran Hi, would someone be able to help me get gluster set up on CentOS 7? I'm getting dependency issues and all I've found online was one other person with the same issue maybe three weeks ago, no apparent solution, possibly a problem within the epel repo.
21:40 Jmainguy I have it working on centos7, dont remember doing anything special
21:41 devinmcelheran Jmainguy, when was the most recent time you installed it?
21:41 Jmainguy few months at least, maybe 4-5
21:41 Jmainguy but it's running atm, you're saying a fresh install will fail?
21:42 devinmcelheran Ah, the error I get is "Package: glusterfs-server-3.7.6-1.el7.x86_64 (glusterfs-epel) Requires: liburcu-bp.so.1()(64bit)" and then another one following it as well.
21:42 Jmainguy https://www.gluster.org/community/documentation/index.php/Getting_started_install
21:42 Jmainguy guessing you followed that and hit this error
21:42 devinmcelheran I'm sorry, I think I should specify, I'm trying to install the server.
21:44 devinmcelheran But, yes, that's essentially what I did, just using the same instructions from several other sites hoping I might find a working way.
21:44 devinmcelheran But whether I used the RHEL or CentOS epel repo, it didn't work.
21:44 Jmainguy k one sec
21:48 Jmainguy devinmcelheran: wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo into your /etc/yum.repos.d
21:48 Jmainguy devinmcelheran: yum install epel-release
21:49 Jmainguy devinmcelheran: yum install glusterfs-server
21:49 Jmainguy profit
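
Pulled together, the working sequence on CentOS 7 amounts to roughly the following; the unresolved liburcu dependency lives in EPEL, which is why the epel-release step matters (the systemctl lines are the usual follow-up, not part of the steps above):

    wget -P /etc/yum.repos.d/ http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
    yum install epel-release
    yum install glusterfs-server
    systemctl enable glusterd
    systemctl start glusterd
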
21:51 Jmainguy hagarth: JoeJulian johnmark purpleidea semiosis https://www.gluster.org/community/documentation/index.php/Getting_started_install doesnt work for rhel/centos
21:52 devinmcelheran Jmainguy, awesome. Can you explain what it was that was holding me back?
21:52 Jmainguy I think the URL should be changed to http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo for those, and also I had to install epel on centos to make this work
21:52 Jmainguy devinmcelheran: 1. you prolly didn't have epel installed, which is another awesome repo full of goodies
21:52 Jmainguy devinmcelheran: yum install epel-release hooked you up with that
21:52 Jmainguy and the URL listed at https://www.gluster.org/community/documentation/index.php/Getting_started_install is a 404
21:53 Jmainguy so I think the team just needs to update the docs to say install epel as well, unless they think they don't need to
21:53 Jmainguy in which case, one of those 5 dudes will respond and tell me to hush
21:54 devinmcelheran Ah, I use Arch as my daily driver, CentOS is a work thing. I need to brush up on it more, for sure. Is epel-release in the CentOS main repo?
21:54 Jmainguy yup
21:54 Jmainguy on rhel it's more difficult to get epel-release installed, basically another wget
21:54 Jmainguy but on centos they made it really easy to add with just yum install epel-release
21:54 devinmcelheran I thought epel had to be added manually with a .repo file, no?
21:54 Jmainguy on rhel it does
21:55 devinmcelheran Ah
21:55 Jmainguy epel-release basically just installs that .repo file for you
21:55 devinmcelheran Good to know.
21:55 Jmainguy which is another reason that I believe centos > rhel
21:55 Jmainguy =)
21:55 devinmcelheran I've come across epel when needing a driver before, ironically it was my ethernet driver, so I had to get it manually anyways.
21:55 Jmainguy nice
21:56 oytun joined #gluster
21:56 devinmcelheran Well, thank you very much. Now I've got to go make some supper for the little one, but I'll probably be back at some point tonight. Really, thank you.
21:57 Jmainguy yeah np man, gl
21:59 cvstealth joined #gluster
22:01 social joined #gluster
22:39 devinmcelheran joined #gluster
22:50 ndk joined #gluster
22:51 portante joined #gluster
22:54 RedW joined #gluster
23:23 EinstCrazy joined #gluster
23:28 mlncn joined #gluster
23:31 delhage joined #gluster
23:49 plarsen joined #gluster
23:55 delhage joined #gluster
23:58 oytun joined #gluster
