
IRC log for #gluster, 2016-05-13


All times shown according to UTC.

Time Nick Message
00:02 nathwill joined #gluster
00:16 JesperA joined #gluster
00:39 cliluw joined #gluster
00:47 cliluw joined #gluster
01:01 MugginsM joined #gluster
01:02 RameshN joined #gluster
01:12 MugginsM joined #gluster
01:18 MugginsM joined #gluster
01:22 MugginsM joined #gluster
01:27 amye joined #gluster
01:30 EinstCrazy joined #gluster
01:32 harish joined #gluster
01:46 EinstCra_ joined #gluster
01:52 EinstCrazy joined #gluster
01:55 EinstCrazy joined #gluster
01:57 EinstCrazy joined #gluster
01:58 EinstCra_ joined #gluster
02:02 EinstCrazy joined #gluster
02:10 harish joined #gluster
02:14 MugginsM joined #gluster
02:28 sakshi joined #gluster
02:33 EinstCrazy joined #gluster
02:34 Lee1092 joined #gluster
02:40 amye joined #gluster
02:49 harish joined #gluster
02:53 ahino joined #gluster
02:54 hchiramm joined #gluster
03:03 nathwill joined #gluster
03:19 MugginsM joined #gluster
03:39 itisravi joined #gluster
03:39 atinm joined #gluster
03:40 nbalacha joined #gluster
03:42 nishanth joined #gluster
03:44 kshlm joined #gluster
03:48 amye joined #gluster
03:53 MugginsM joined #gluster
03:53 raghug joined #gluster
03:53 nehar joined #gluster
03:58 haomaiwang joined #gluster
03:58 haomaiwang joined #gluster
04:01 haomaiwang joined #gluster
04:08 hagarth joined #gluster
04:22 Gnomethrower joined #gluster
04:31 MugginsM joined #gluster
04:36 shubhendu joined #gluster
04:53 mchangir joined #gluster
05:01 haomaiwang joined #gluster
05:01 itisravi joined #gluster
05:02 prasanth joined #gluster
05:04 rafi joined #gluster
05:05 jiffin joined #gluster
05:07 jiffin1 joined #gluster
05:08 poornimag joined #gluster
05:10 ndarshan joined #gluster
05:17 pur joined #gluster
05:18 jiffin joined #gluster
05:23 karthik___ joined #gluster
05:27 Bhaskarakiran joined #gluster
05:33 k4n0 joined #gluster
05:36 overclk joined #gluster
05:40 gowtham joined #gluster
05:41 gem joined #gluster
05:41 ecoreply joined #gluster
05:43 Manikandan joined #gluster
05:44 hgowtham joined #gluster
05:49 gem_ joined #gluster
05:50 Apeksha joined #gluster
05:51 Siavash joined #gluster
05:51 Siavash joined #gluster
05:58 Manikandan joined #gluster
06:00 rastar joined #gluster
06:02 haomaiwang joined #gluster
06:03 spalai joined #gluster
06:08 MugginsM joined #gluster
06:10 haomaiwang joined #gluster
06:10 aspandey joined #gluster
06:15 kdhananjay joined #gluster
06:15 arcolife joined #gluster
06:17 skoduri joined #gluster
06:18 Gnomethrower joined #gluster
06:19 rafi joined #gluster
06:21 aravindavk joined #gluster
06:26 kotreshhr joined #gluster
06:27 karnan joined #gluster
06:30 jtux joined #gluster
06:34 ashiq joined #gluster
06:41 hackman joined #gluster
06:49 jiffin1 joined #gluster
06:49 k4n0 joined #gluster
06:52 [Enrico] joined #gluster
06:52 ramky joined #gluster
06:55 unlaudable joined #gluster
06:59 hchiramm joined #gluster
06:59 wnlx joined #gluster
07:01 haomaiwang joined #gluster
07:02 jiffin1 joined #gluster
07:07 atalur joined #gluster
07:19 [Enrico] joined #gluster
07:21 Bhaskarakiran joined #gluster
07:32 mowntan joined #gluster
07:33 aravindavk joined #gluster
07:34 fsimonce joined #gluster
07:41 jiffin1 joined #gluster
07:45 ctria joined #gluster
07:46 jiffin1 joined #gluster
07:47 sage joined #gluster
07:48 anil_ joined #gluster
07:52 ivan_rossi joined #gluster
07:52 jiffin1 joined #gluster
07:55 karthik___ joined #gluster
08:01 haomaiwang joined #gluster
08:02 Mmike joined #gluster
08:02 kovshenin joined #gluster
08:03 jiffin1 joined #gluster
08:03 Mmike joined #gluster
08:05 ctria joined #gluster
08:08 paul98 joined #gluster
08:09 tdasilva joined #gluster
08:12 nbalacha joined #gluster
08:32 skyrat JoeJulian, hi. The split-brain I was solving on Tuesday is gone. Thank you for the information. Just one little thing: is there a command to quickly detect a split brain? When we are in a split brain, the heal info command usually takes about 40 minutes to finish, depending on the number of files in split brain
08:34 Slashman joined #gluster
08:43 d0nn1e joined #gluster
08:43 Manikandan_ joined #gluster
08:46 robb_nl joined #gluster
09:01 sakshi joined #gluster
09:01 haomaiwang joined #gluster
09:05 Wizek joined #gluster
09:10 cyberbootje joined #gluster
09:13 arcolife joined #gluster
09:14 aravindavk joined #gluster
09:16 atinm joined #gluster
09:17 jiffin1 joined #gluster
09:19 nbalacha joined #gluster
09:23 mchangir joined #gluster
09:36 p8952 joined #gluster
09:38 muneerse joined #gluster
09:45 muneerse joined #gluster
09:49 muneerse2 joined #gluster
09:58 muneerse joined #gluster
10:00 karthik___ joined #gluster
10:01 haomaiwang joined #gluster
10:01 muneerse2 joined #gluster
10:05 muneerse joined #gluster
10:08 shubhendu joined #gluster
10:08 muneerse2 joined #gluster
10:12 aravindavk joined #gluster
10:12 muneerse joined #gluster
10:16 muneerse2 joined #gluster
10:20 nbalacha joined #gluster
10:22 muneerse joined #gluster
10:45 atinm joined #gluster
10:46 hackman joined #gluster
10:50 atalur itisravi++
10:50 glusterbot atalur: itisravi's karma is now 6
10:52 shubhendu joined #gluster
10:55 jiffin1 joined #gluster
10:57 gem joined #gluster
10:57 gem_ joined #gluster
11:10 karthik___ joined #gluster
11:10 johnmilton joined #gluster
11:50 Lee1092 joined #gluster
11:57 hackman joined #gluster
11:57 jiffin1 joined #gluster
12:01 Marbug left #gluster
12:04 ahino joined #gluster
12:11 karnan joined #gluster
12:11 amye joined #gluster
12:13 atalur joined #gluster
12:15 kotreshhr joined #gluster
12:32 jiffin1 joined #gluster
12:36 unclemarc joined #gluster
12:41 plarsen joined #gluster
12:41 kkeithley joined #gluster
12:43 RameshN joined #gluster
12:47 kotreshhr left #gluster
13:10 kovshenin joined #gluster
13:22 jiffin1 joined #gluster
13:26 mpingu joined #gluster
13:27 jiffin1 joined #gluster
13:27 spalai left #gluster
13:31 B21956 joined #gluster
13:33 jiffin1 joined #gluster
13:42 haomaiwang joined #gluster
13:44 unlaudable joined #gluster
13:45 skylar joined #gluster
13:46 jiffin1 joined #gluster
13:46 haomaiwang joined #gluster
13:49 jiffin joined #gluster
13:52 papamoose joined #gluster
13:56 jiffin1 joined #gluster
14:00 papamoose joined #gluster
14:00 nehar joined #gluster
14:01 haomaiwang joined #gluster
14:01 d0nn1e joined #gluster
14:09 EinstCrazy joined #gluster
14:10 jiffin1 joined #gluster
14:13 jobewan joined #gluster
14:20 gowtham joined #gluster
14:32 EinstCrazy joined #gluster
14:38 wushudoin joined #gluster
14:40 Vaelater1 joined #gluster
14:41 mchangir joined #gluster
14:49 bennyturns joined #gluster
14:51 skyrat Is there a command to quickly detect a split brain? When we are in a split brain, the heal info command usually takes about 40 minutes to finish, depending on the number of files in split brain
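(A possibly quicker check, assuming a 3.7-era release where the option is available: heal info can be restricted to split-brain entries only, so it does not walk every pending heal. The volume name below is a placeholder.)
    gluster volume heal gv0 info split-brain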
14:55 shaunm joined #gluster
15:01 haomaiwang joined #gluster
15:07 DV__ joined #gluster
15:07 atinm joined #gluster
15:07 julim joined #gluster
15:13 nbalacha joined #gluster
15:13 robb_nl joined #gluster
15:14 Wojtek skyrat: For split-brains, I never use gluster heal info. I run stat over the output of `find /gluster/path` in parallel. Any stderr with input/output error is a split-brain, which we then fix
15:15 level7 joined #gluster
15:15 drowe Wojtek: what's your go-to method for resolving split-brains? We have 3 instances in quorum, but it seems occasionally things don't get resolved?
15:16 kpease joined #gluster
15:29 Wojtek We have a process that, through ssh, calls all bricks that have the file and does a file validation (md5sum for example) to determine which brick has the correct file. The bad data is deleted from the brick. Then a fopen on the gluster volume for the file triggers a resync to the empty brick and the file is healthy again. If all bricks have good data, but the file is still in split, we stream
15:29 Wojtek the data to a file in /tmp, delete all copies of the file from all bricks, and copy the file back to the gluster volume to create a new file (instead of attempting to patch the extended attributes)
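(A minimal sketch of the detection step Wojtek describes, assuming the volume is fuse-mounted at /gluster/path; the mount point, parallelism, and log path are placeholders. Files in split-brain fail stat with an input/output error, so collecting stderr finds them.)
    # stat every file on the mounted volume in parallel;
    # split-brain files show up as "Input/output error" on stderr
    find /gluster/path -type f -print0 \
      | xargs -0 -P 8 -n 64 stat > /dev/null 2> /tmp/splitbrain.log
    grep -i 'input/output error' /tmp/splitbrain.log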
15:39 F2Knight joined #gluster
15:40 dblack joined #gluster
15:46 jiffin1 joined #gluster
15:50 jiffin joined #gluster
15:52 Marbug joined #gluster
16:01 haomaiwang joined #gluster
16:04 jobewan joined #gluster
16:08 level7 joined #gluster
16:09 JoeJulian skyrat: Please file a bug about that. There's no way a heal info should be taking more than a few seconds.
16:09 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:16 F2Knight joined #gluster
16:22 kkeithley_q joined #gluster
16:27 jgrimmett joined #gluster
16:28 jgrimmett hello all
16:30 jgrimmett i am looking for someone that might be interested in doing some gluster support assistance... paid of course
16:32 JoeJulian There's a link for that on the web site.
16:32 JoeJulian We only help for free in here.
16:33 ndevos https://www.gluster.org/consultants/
16:33 glusterbot Title: Professional Support Gluster (at www.gluster.org)
16:34 ndevos but if you are not in Europe, it'll be more difficult to get one of those companies to support you (well Red Hat is pretty global of course)
16:35 jgrimmett i did not know if it was appropriate for me to ask for help here
16:35 ndevos also, in The Netherlands we have at least proxy.nl and kratz.nl that support some of their customers with Gluster
16:36 jgrimmett we're trying to use rdma/infiniband
16:36 ndevos you can surely ask for help here, but we don't give any guarantees about the result, or response times :)
16:36 jgrimmett lol
16:36 jgrimmett fair enough
16:36 ivan_rossi left #gluster
16:36 jgrimmett well
16:37 jgrimmett this is what happens
16:37 nathwill joined #gluster
16:37 jgrimmett from the client:
16:37 ndevos most people that use rdma/ib do that with IB-over-IP
16:37 jgrimmett [root@cb-las-p1c1h3 mnt]# mount -t glusterfs -o transport=rdma cb-las-p1c1ps1gfs1:/data/ps1/gv0 /mnt/tmp Mount failed. Please check the log file for more details. [root@cb-las-p1c1h3 mnt]#
16:38 jgrimmett yes we have IPs in place
16:38 natarej joined #gluster
16:38 jgrimmett and RDMA is configured properly
16:38 jgrimmett but the mount fails
16:38 jgrimmett on the server side this is what the log says:
16:38 ndevos oh, if you use IP-over-IB (right order now?), you do not need to specify transport=rdma
16:39 JoeJulian I'm just going to toss this out there now since I'm getting the impression you're new to IRC. Be sure to use a ,,(paste) service if you need to share logs or anything like that instead of pasting in channel.
16:39 glusterbot For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
16:40 ndevos the transport=rdma is intended for the native rdma support, but that is more difficult to use (and less tested) than a full IP-o-IB configuration
16:40 jgrimmett i apologize for the pasting
16:40 JoeJulian You're good so far, I just sensed a wall of text coming on. :)
16:40 jgrimmett you would have been correct
16:40 jgrimmett im new to IRC as well... never really used it either
16:41 JoeJulian No worries. It's just like slack without the emoji.
16:41 ndevos I don't have much experience with the rdma stuff, but some others do, I think kkeithley is one of them
16:41 MikeLupe joined #gluster
16:42 jgrimmett so.... @ndevos - we were successful using tcp
16:42 jgrimmett but performance was not very good
16:42 jgrimmett only about 300-400MB/s
16:43 ndevos no idea what kind of performance is to be expected from it...
16:43 jgrimmett when we tested NFS which was attached to an iSCSI array... did about 500-600MB/s
16:46 jgrimmett based on what you are saying, i think we are going to go test more with IPoIB
16:46 atinm joined #gluster
16:48 JoeJulian what kind of performance? If that's write performance to a replicated volume, perhaps that's correct.
16:49 jgrimmett no...that was write to a RAID10 ssd volume on a Dell MD1220
16:49 JoeJulian So yes, write.
16:49 JoeJulian And you're using a 1 brick gluster volume?
16:49 jgrimmett yes
16:50 JoeJulian Not trying to be difficult with this question but... why?
16:51 JoeJulian Ah, shoot. I've got a call with WiWynn shortly and I haven't made my coffee yet. I'm going to need it before this call <sigh>. brb.
16:58 jgrimmett well... we are trying to test performance using gluster over RDMA between our host machines and the gluster server
16:58 JoeJulian Oh, right. That makes sense.
16:59 jgrimmett left #gluster
16:59 JoeJulian Oh, lookie what I found when I was referencing back to ask you about the log... what's the name of the volume you created?
17:01 haomaiwang joined #gluster
17:03 jiffin1 joined #gluster
17:04 natarej joined #gluster
17:06 jiffin1 joined #gluster
17:08 hagarth joined #gluster
17:09 ndevos jgjorgji: I know that bennyturns did some tests with rdma once, and I think he used multiple fuse mounts (on the same client) to increase the bandwidth usage
17:10 ndevos uh, sorry jgjorgji, that was for jgrimmett who just disconnected :-/
17:11 jgrimmett_ joined #gluster
17:11 jgrimmett_ i apologize
17:11 jgrimmett_ i lost my connection a few mins ago
17:12 jgrimmett_ so i did not get any of the messages that were sent after my last chat
17:12 ndevos no problem, that is why we have logs :)
17:12 ndevos https://botbot.me/freenode/gluster/
17:13 jgrimmett_ thanks!
17:15 kkeithley gluster uses the RDMA connection manager to make connections. And/or uses the TCP connection to exchange rdma info. I would think if you can mount using the IP-over-IB TCP addresses and IB/RDMA is generally working, then you should be able to create a volume with transport=rdma and it would just work.
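(A rough sketch of what kkeithley describes, reusing the hostname and path from the earlier failed mount attempt; the hostnames, brick path, and volume name are examples only.)
    # create the volume so it supports both tcp and rdma transports
    gluster volume create gv0 transport tcp,rdma cb-las-p1c1ps1gfs1:/data/ps1/brick1
    gluster volume start gv0
    # then mount over the native rdma transport
    mount -t glusterfs -o transport=rdma cb-las-p1c1ps1gfs1:/gv0 /mnt/tmp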
17:15 jgrimmett_ ok
17:15 jgrimmett_ i'll give that a try
17:15 kkeithley If that's not the case, then all I can suggest atm is see if you can get help from the India devs.
17:16 jgrimmett_ definitely
17:16 jgrimmett_ we're going to try that in just a bit... thanks for the tip!
17:17 scubacuda joined #gluster
17:20 kkeithley ...and it would just work. Unless it somehow got broken along the way. BTW, I didn't see you say what version of gluster you're using.
17:20 jgrimmett_ 3.7
17:20 jgrimmett_ in general, what is the biggest factor when it comes to gluster performance?
17:21 kkeithley lots of small files, and lots of stat/fstat calls will hurt gluster performance.
17:22 kkeithley PHP does a stat on every include. Kills gluster. There are some tricks to make it better.
17:22 kkeithley @php
17:22 glusterbot kkeithley: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
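(One way those flags could be passed at mount time, assuming mount.glusterfs forwards them as options; the timeout values and paths are placeholders to tune.)
    mount -t glusterfs \
      -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache \
      server:/gv0 /var/www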
17:22 jgrimmett_ we were planning to use it to store VMs in a KVM environment
17:23 kkeithley quite a few people do that.
17:23 Chr1st1an joined #gluster
17:25 JoeJulian jgrimmett_: what's the name of the volume you created?
17:25 jgrimmett_ gv0
17:28 JoeJulian The mount command you posted earlier you tried to mount "server:/data/ps1/gv0". You need to use the volume name, not a brick name, ie. "server:/gv0"
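(The difference in practice, using the hostname from the earlier paste: the brick path is what was given to "volume create", the volume name is what you pass to mount.)
    # wrong: brick path (cb-las-p1c1ps1gfs1:/data/ps1/gv0)
    # right: volume name (cb-las-p1c1ps1gfs1:/gv0)
    mount -t glusterfs -o transport=rdma cb-las-p1c1ps1gfs1:/gv0 /mnt/tmp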
17:28 jgrimmett_ lemme give it a try
17:28 jgrimmett_ thanks!
17:29 jgrimmett_ holy shit it worked!
17:29 jgrimmett_ sorry!
17:29 jgrimmett_ holy $@%^ it worked
17:29 jgrimmett_ it mounted!
17:29 jgrimmett_ vah-hoo!
17:31 JoeJulian :D
17:36 vshankar joined #gluster
17:36 jiffin1 joined #gluster
17:40 dlambrig_ joined #gluster
17:51 dlambrig_ joined #gluster
17:52 sage joined #gluster
17:54 cliluw joined #gluster
17:54 spalai joined #gluster
18:01 haomaiwang joined #gluster
18:05 andy-b joined #gluster
18:25 kovsheni_ joined #gluster
18:36 jgrimmett_ sorry, had to bring wife to work
18:36 jgrimmett_ now to get the other hosts to connect to the volume as well
18:44 Slashman joined #gluster
18:50 level7 joined #gluster
19:01 haomaiwang joined #gluster
19:07 wushudoin joined #gluster
19:13 kpease joined #gluster
19:21 kpease joined #gluster
19:22 MikeLupe joined #gluster
19:38 JesperA- joined #gluster
19:38 mowntan joined #gluster
19:39 mowntan joined #gluster
19:39 mowntan joined #gluster
19:45 dlambrig_ joined #gluster
19:45 jiffin joined #gluster
19:46 jiffin1 joined #gluster
19:51 jiffin joined #gluster
20:01 haomaiwang joined #gluster
20:27 shyam left #gluster
21:01 haomaiwang joined #gluster
21:02 johnmilton joined #gluster
21:24 johnmilton joined #gluster
21:41 bennyturns joined #gluster
21:45 ninjaryan joined #gluster
21:53 shersi joined #gluster
21:56 shersi Hi All, i am running a two-node replicated volume, glusterfs 3.7.10 on CentOS 7. I'm facing an issue: whenever the glusterfs server runs self-heal every 10 mins, the clients cannot read/write or access the mounted glusterfs volume.
21:56 shersi Any help will be appreciated.
22:01 haomaiwang joined #gluster
22:14 natarej_ joined #gluster
22:21 Biopandemic joined #gluster
22:25 JoeJulian 3.7.10 had a problem. If you disable client-side self-heal it should be better. gluster volume set $vol cluster.data-self-heal off
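(Spelled out, assuming a release that also has "gluster volume get" for inspecting the current value; the volume name is a placeholder.)
    # check the current setting, then turn client-side data self-heal off
    gluster volume get myvol cluster.data-self-heal
    gluster volume set myvol cluster.data-self-heal off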
22:27 squizzi joined #gluster
22:41 skyrat Wojtek, thanks, yes you're right, I also use this approach to determine the real files in split-brain. But it's not the quick-check solution I'm looking for.
22:52 skyrat JoeJulian, ok, I will file a bug. Thanks
22:52 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
22:59 davpenguin joined #gluster
22:59 davpenguin hi, and good night from me here in Spain
23:00 davpenguin what do you think about this error on gluster version 3.7.11: 0-management: Unable to acquire lock for glstvol01
23:00 davpenguin glstvol01 is the name of volume
23:01 haomaiwang joined #gluster
23:02 JoeJulian It means it was unable to acquire a management lock for that volume. Could be for a volume change, or I believe that's also used for "heal info"
23:02 JoeJulian If you have a cron job pulling heal info, I think that could happen.
23:02 JoeJulian I also believe I heard that mentioned as a bug that's fixed in the recent or upcoming release.
23:21 davpenguin umm thanks JoeJulian
23:21 davpenguin that's been happening to me since version 3.7.8, and i thought it was resolved in 3.7.11, but it is not resolved
23:22 davpenguin do you think that is a big problem?
23:22 JoeJulian Right, 3.7.12 is what I heard. And no, it should not be a problem at all.
23:24 dlambrig_ joined #gluster
23:28 davpenguin ok JoeJulian, thanks for your help
23:56 d0nn1e joined #gluster
