
IRC log for #gluster, 2016-03-23


All times shown according to UTC.

Time Nick Message
00:00 ovaistariq joined #gluster
00:07 baojg joined #gluster
00:16 genisuoftime joined #gluster
00:17 genisuoftime does anyone here know much about gluster shared storage?
00:18 genisuoftime whenever I run >> gluster volume set all cluster.enable-shared-storage enable, only 3 out of 14 nodes have bricks allocated to the gluster_shared_storage volume. this seems to contradict the documentation
00:21 JoeJulian That is correct. 3 bricks for replica 3.
00:21 genisuoftime but my replica count on the volume is 2
00:22 genisuoftime Volume Name: test-volume Type: Distributed-Replicate Volume ID: 77b7e3fc-265a-40cf-be44-a4674b84fc4b Status: Created Number of Bricks: 7 x 2 = 14
00:22 genisuoftime Volume Name: gluster_shared_storage Type: Replicate Volume ID: d2c6aa85-6f88-44d8-94d1-043fd88a232c Status: Started Number of Bricks: 1 x 3 = 3
00:22 JoeJulian That has nothing to do with the shared-storage volume.
00:23 genisuoftime ah
00:23 genisuoftime thank you
00:23 genisuoftime i was mis-reading the documentation
00:23 genisuoftime well actually
00:23 genisuoftime this paragraph seems to contradict:
00:23 JoeJulian https://github.com/gluster/glusterfs/blob/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh#L37-L46
00:23 glusterbot Title: glusterfs/S32gluster_enable_shared_storage.sh at master · gluster/glusterfs · GitHub (at github.com)
00:24 genisuoftime The volume created is either a replica 2, or a replica 3 volume. This depends on the number of nodes which are online in the cluster at the time of enabling this option and each of these nodes will have one brick participating in the volume. The brick path participating in the volume is /var/lib/glusterd/ss_brick.
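For reference, a minimal sketch of the sequence being discussed; the shared-storage volume name and brick path come from the hook script linked above, while the exact output depends on how many nodes are online:

    # run once, on any node in the cluster
    gluster volume set all cluster.enable-shared-storage enable

    # the hook script picks at most 3 online nodes as bricks, independent of
    # the replica count of any existing data volume
    gluster volume info gluster_shared_storage
    # expect: Type: Replicate, Number of Bricks: 1 x 2 = 2 or 1 x 3 = 3,
    # each brick at <node>:/var/lib/glusterd/ss_brick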
00:25 genisuoftime thanks for the source code link
00:25 JoeJulian I can see how that could have been written, but it is certainly misleading.
00:25 genisuoftime that clears it up for me. the doc is wrong
00:25 JoeJulian Well, it's not if you have a 2 brick volume. :)
00:26 JoeJulian The author was clearly thinking of a replica 1 x 2 volume vs. a 1 x 3+.
00:27 JoeJulian genisuoftime: Do you have the link to that page?
00:27 genisuoftime it's the RHEL glusterfs docs :(
00:28 genisuoftime https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Red_Hat_Storage_Volumes-Shared_Volume.html
00:28 glusterbot Title: 10.8. Setting up Shared Storage Volume (at access.redhat.com)
00:28 genisuoftime not sure if that means much to you guys
00:30 calavera joined #gluster
00:34 harish_ joined #gluster
00:35 ghenry joined #gluster
00:42 JoeJulian I guess you would need to have a bug filed against RHS for that.
00:43 chromatin joined #gluster
00:44 chromatin I have a new install on RHEL7 and found that while I can read at about ~1.5GB/sec from the RAID60 array directly, reading the same files from GlusterFS (single node - haven’t joined any peers yet) maxes out around 200 MB/sec — does anyone know what could be going on?
00:47 hackman joined #gluster
00:50 harish_ joined #gluster
00:53 genisuoftime thanks for your help @JoeJulian. I was on a wild goose chase for a few hours there. will certainly let them know
01:01 ovaistariq joined #gluster
01:18 chromatin Does anyone have any experience reading from a single brick faster than 200 MB/sec ?
01:18 chromatin I’m bottlenecked about there, whereas the underlying hardware for that brick can do 1-2 GB/sec
01:19 JoeJulian One sec. I think you're hitting a bug...
01:20 amye joined #gluster
01:21 chromatin @JoeJulian Thanks for taking the time to think about this problem. Appreciate it
01:21 anmol joined #gluster
01:21 EinstCrazy joined #gluster
01:23 Lee1092 joined #gluster
01:28 plarsen joined #gluster
01:28 JoeJulian chromatin: Ah, found it. bug 1316327
01:28 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1316327 low, unspecified, ---, vbellur, MODIFIED , Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance.  Fresh install of 3.7.8 also has low write performance
01:29 hagarth JoeJulian: this has been fixed in 3.7.9
01:29 JoeJulian +1
01:29 chromatin JoeJulian: unfortunately I have 200 MB/sec /read/ performance; this bug describes write performance
01:29 om joined #gluster
01:30 JoeJulian I've never had a problem filling my network on reads.
01:30 JoeJulian What version are you using?
01:30 johnmilton joined #gluster
01:31 chromatin JoeJulian: Can you suggest some diagnostics? I am ultra new at GlusterFS and this is only a single node so far - haven’t joined peers yet.  How can I show version? I don’t see a version subcommand for `gluster`
01:31 JoeJulian --version
01:31 JoeJulian Also... whatever version you installed. ;)
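A couple of hedged ways to check the installed version on a RHEL-style box (package names assume the usual gluster packaging):

    glusterfs --version                     # client binary / libraries
    glusterd --version                      # management daemon
    rpm -q glusterfs glusterfs-server glusterfs-fuse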
01:31 chromatin 3.7.1 on RHEL7
01:32 JoeJulian I would suggest you use the community repos. ,,(latest)
01:32 glusterbot The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
01:32 chromatin I’ll see what I can do. I am the bioinformatician; I have to convince the ops guy to upgrade it.
01:33 JoeJulian Point 'em my way. I'll berate them unmercifully. ;)
01:33 chromatin Still, even being 3.7.1 I can’t imagine how this could happen other than massive overhead at some layer
01:34 chromatin Single RAID60 volume mounted at /rghs/brick0/ ; gluster FUSE mounted at /mnt/gluster . Easy-peasy one  would think.
01:34 JoeJulian How are you getting your number?
01:35 chromatin dd if=file of=/dev/null bs=[4K,128K,1M] (no difference by block size) ; and I made sure to use different files so that nothing is cached.
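A sketch of that comparison, using the paths chromatin mentioned earlier (directory and file names are hypothetical; the point is to read different, uncached files from the brick and from the FUSE mount):

    # baseline: straight from the RAID60-backed brick filesystem
    dd if=/rghs/brick0/somedir/file_a of=/dev/null bs=1M

    # a different file of similar size through the gluster FUSE mount
    dd if=/mnt/gluster/somedir/file_b of=/dev/null bs=1M

    # optionally drop the page cache between runs so neither read is cached (needs root)
    sync; echo 3 > /proc/sys/vm/drop_caches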
01:35 chromatin I checked it with dd because I noticed a regular workflow was slow
01:36 JoeJulian yeah
01:37 JoeJulian Nothing seems obvious. A one-brick locally mounted volume should max out something.
01:39 chromatin Thanks for your thoughts. Appreciate it.
01:39 JoeJulian chromatin: I guess you could look at the stats
01:40 JoeJulian gluster volume top
01:40 chromatin Interesting. `gluster top` is not in man gluster(8)
01:41 JoeJulian gluster volume help
01:41 JoeJulian Has way more output than man.
01:44 chromatin gluster volume top volume0 read-perf seems to show a list of files with access times in the last column, but the first column (Mbps) is all zero :|
01:46 baojg joined #gluster
01:48 chromatin Yes, something is obviously not right. the MBps column is always zero in read-perf output
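For context, a hedged sketch of the top/profile commands involved ("volume0" is the name used above; the host and brick path are placeholders):

    # list files by observed read throughput (what was run above)
    gluster volume top volume0 read-perf list-cnt 10

    # have a brick run its own read benchmark (block size in bytes)
    gluster volume top volume0 read-perf bs 1048576 count 1024 brick <host>:/rghs/brick0

    # per-FOP latency statistics are another way to see where the time goes
    gluster volume profile volume0 start
    gluster volume profile volume0 info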
01:48 bennyturns joined #gluster
01:52 JoeJulian I guess I would check my client and brick logs.
01:55 chromatin Well, found at least one cosmetic bug
01:56 chromatin top read-perf reports throughput in “MBps”. However, the brick logs indicate the (true) speed correctly in Mbps
01:58 dlambrig_ joined #gluster
01:58 nbalacha joined #gluster
02:02 ovaistariq joined #gluster
02:06 JoeJulian Probably should use that bs made-up Mibps instead.
02:06 chromatin My log files are filled with error and warning entries of various types; is this typical? If not, this could be causing the slowdown perhaps
02:06 JoeJulian Warnings, yes. Errors, not so frequently.
02:18 chromatin Most are in the mnt.log and are warnings from FUSE about inability to remove xattrs which is puzzling as I have an XFS backing store. However etc-glusterfs-glusterd log file has lots of errors labeled 0-management about (1) inability to read option for <whatever> key, but also (2) xlator_volopt_dynload error
02:26 chromatin left #gluster
02:26 chromatin joined #gluster
02:34 harish_ joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:50 baojg joined #gluster
02:51 haomaiwang joined #gluster
02:51 ahino joined #gluster
02:55 JoeJulian chromatin: did you write files directly to the brick(s)?
02:56 chromatin JoeJulian: No, all files were written to the FUSE mount
02:56 JoeJulian selinux?
02:57 chromatin I actually do not know how to tell if my copy of RHEL7 (fresh install) is SELinux enabled
02:58 nathwill joined #gluster
02:58 chromatin Yes, looks like it is. sudo getenforce says "Enforcing"
03:01 haomaiwa_ joined #gluster
03:03 ovaistariq joined #gluster
03:04 JoeJulian A simple test would be "setenforce 0"
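A hedged sketch of that test (setenforce changes only the running state, not the config file):

    getenforce                  # Enforcing / Permissive / Disabled
    setenforce 0                # permissive until reboot
    # ... rerun the dd read comparison ...
    setenforce 1                # back to enforcing
    # if SELinux is involved, denials should show up in the audit log:
    ausearch -m avc -ts recent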
03:05 chromatin Is SELinux known to interact negatively with GlusterFS?
03:05 JoeJulian Not necessarily.
03:06 atalur joined #gluster
03:09 chromatin I will disable and do some more testing tomorrow. I genuinely appreciate your help.
03:09 chromatin as in bioinformatics we work with files > 5 GB, 200 MByte/sec is a no-go :|
03:10 JoeJulian Yeah, that wouldn't work for most workloads. :D
03:17 anmol joined #gluster
03:20 gbox_ joined #gluster
03:35 nehar joined #gluster
03:51 ramteid joined #gluster
03:51 nathwill joined #gluster
03:52 shubhendu joined #gluster
03:54 atinm joined #gluster
03:59 hgowtham joined #gluster
04:01 nishanth joined #gluster
04:01 haomaiwa_ joined #gluster
04:01 kkeithley1 joined #gluster
04:03 itisravi joined #gluster
04:03 ovaistariq joined #gluster
04:04 dlambrig_ joined #gluster
04:08 RameshN joined #gluster
04:08 overclk joined #gluster
04:10 kanagaraj joined #gluster
04:11 genisuoftime why does the corosync.conf file generated by pcs cluster setup not work?
04:11 genisuoftime Mar 23 15:08:02 n0-gluster1-qh2 corosync[74772]:  [MAIN  ] parse error in config: No interfaces defined Mar 23 15:08:02 n0-gluster1-qh2 corosync[74772]:  [MAIN  ] Corosync Cluster Engine exiting with status 8 at main.c:1278.
04:11 genisuoftime join #corosync
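Off-topic for gluster, but for what it's worth: "No interfaces defined" usually means the totem section has neither an interface block nor a usable nodelist. A hedged sketch of the kind of stanza corosync expects; the cluster name, address, and node entries are placeholders:

    totem {
        version: 2
        cluster_name: gluster-ha
        transport: udpu
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.1.0    # network address of the cluster interconnect
        }
    }
    nodelist {
        node {
            ring0_addr: n0-gluster1-qh2
            nodeid: 1
        }
    }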
04:12 haomaiwa_ joined #gluster
04:13 nbalacha joined #gluster
04:22 ashiq joined #gluster
04:26 gem joined #gluster
04:28 dlambrig_ joined #gluster
04:35 rastar joined #gluster
04:35 kshlm joined #gluster
04:41 F2Knight joined #gluster
04:43 kshlm joined #gluster
04:44 valkyr1e joined #gluster
04:51 nehar joined #gluster
04:54 om joined #gluster
04:59 F2Knight_ joined #gluster
05:01 haomaiwa_ joined #gluster
05:02 kshlm joined #gluster
05:04 ovaistariq joined #gluster
05:09 prasanth joined #gluster
05:13 Apeksha joined #gluster
05:16 ndarshan joined #gluster
05:18 gowtham joined #gluster
05:28 aravindavk joined #gluster
05:28 Bhaskarakiran joined #gluster
05:38 sakshi joined #gluster
05:39 DV joined #gluster
05:39 poornimag joined #gluster
05:45 kdhananjay joined #gluster
05:45 rafi joined #gluster
05:49 ahino joined #gluster
05:53 karthik___ joined #gluster
05:56 meyang joined #gluster
05:56 Bhaskarakiran joined #gluster
05:57 karnan joined #gluster
05:57 ggarg joined #gluster
06:00 meyang_ joined #gluster
06:01 haomaiwa_ joined #gluster
06:04 DV__ joined #gluster
06:04 vmallika joined #gluster
06:05 ovaistariq joined #gluster
06:05 atalur joined #gluster
06:05 DV__ joined #gluster
06:11 Gnomethrower joined #gluster
06:11 gem joined #gluster
06:13 gem joined #gluster
06:17 DV joined #gluster
06:19 gem joined #gluster
06:19 gem_ joined #gluster
06:20 skoduri joined #gluster
06:26 mhulsman joined #gluster
06:30 om joined #gluster
06:37 dlambrig__ joined #gluster
06:42 liibert joined #gluster
06:43 hchiramm joined #gluster
06:52 ramky joined #gluster
06:54 kshlm joined #gluster
07:01 haomaiwa_ joined #gluster
07:02 atinm joined #gluster
07:02 anil_ joined #gluster
07:02 Manikandan joined #gluster
07:03 camg Why do files removed from a gluster volume reappear within a short time?
07:05 camg Specifically the files that reappear are symlinks
07:06 ovaistariq joined #gluster
07:06 dlambrig_ joined #gluster
07:08 camg This seems to be a known bug: https://bugzilla.redhat.com/show_bug.cgi?id=1140818
07:08 glusterbot Bug 1140818: high, unspecified, ---, bugs, NEW , symlink changes to directory, that reappears on removal
07:10 rafi camg: are they the same? the bug and your issue?
07:10 haomaiwa_ joined #gluster
07:10 gem joined #gluster
07:10 camg rafi: yes it seems identical
07:11 * rafi is reading the description
07:11 camg rafi: I'm running 3.7.6
07:14 kovshenin joined #gluster
07:15 rafi camg: if you have a different reproducer method or anything please update the bug
07:15 camg rafi: OK I just encountered this in the last hour.  What is the status of the bug?
07:15 rafi nbalacha: sakshi: Are you aware of this bug?
07:16 rafi 1140818
07:16 sakshi rafi, let me check
07:16 rafi camg: I also ;)
07:16 rafi camg: me also
07:17 camg rafi: Sure thanks :)
07:17 rafi camg: we will see how we can help you here
07:19 atinm joined #gluster
07:19 sakshi rafi, nope, hadn't come across this bug
07:22 rafi camg: what is the volume type? is it a pure replica or distributed replcated ?
07:23 camg distributed replicated (2 replicas, 2 distributed, 4 bricks)
07:23 gem joined #gluster
07:25 gem_ joined #gluster
07:26 Gnomethrower joined #gluster
07:33 jtux joined #gluster
07:38 [Enrico] joined #gluster
07:39 arcolife joined #gluster
07:49 hackman joined #gluster
08:01 haomaiwang joined #gluster
08:06 ovaistariq joined #gluster
08:10 jri joined #gluster
08:10 dlambrig_ left #gluster
08:11 Daniel_Kanchev joined #gluster
08:12 EinstCra_ joined #gluster
08:27 fsimonce joined #gluster
08:30 kkeithley1 joined #gluster
08:35 aravindavk joined #gluster
08:40 spalai joined #gluster
08:40 DV joined #gluster
08:46 ctria joined #gluster
08:47 ashiq joined #gluster
08:48 ggarg joined #gluster
08:49 nishanth joined #gluster
08:56 shubhendu joined #gluster
08:56 Gnomethrower joined #gluster
08:58 chromatin joined #gluster
08:58 [Enrico] joined #gluster
09:01 haomaiwa_ joined #gluster
09:03 Slashman joined #gluster
09:07 ovaistariq joined #gluster
09:20 skoduri joined #gluster
09:23 shubhendu joined #gluster
09:32 gem joined #gluster
09:39 aravindavk joined #gluster
09:41 legreffier joined #gluster
09:42 legreffier hai
09:42 legreffier I can't find which package will provide gsyncd in CentOS 7
09:42 legreffier 'yum provides' won't help
09:44 Ulrar I had a volume with a replica of 3 and 3 nodes in it. I removed a down brick and added another one, what is the command to make it replicate the files to the new brick?
09:44 Ulrar Tried to start a heal but it doesn't seem to be doing much
09:45 Ulrar Ha, need to add full at the end of the command, that might help
09:45 Ulrar Seems to be healing a lot more suddenly :D
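A hedged sketch of the replace-and-heal sequence Ulrar describes (volume name is a placeholder):

    # after the new brick has been added to the replica set
    gluster volume heal <VOLNAME> full          # walk the whole volume, not just dirty entries
    gluster volume heal <VOLNAME> info          # entries should drain as files land on the new brick
    gluster volume heal <VOLNAME> statistics heal-count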
09:47 ggarg joined #gluster
09:50 ashiq joined #gluster
09:55 legreffier response to self: it's in a separate package within the gluster repo: glusterfs-geo-replication
10:01 haomaiwa_ joined #gluster
10:01 harish_ joined #gluster
10:03 DV joined #gluster
10:06 rastar joined #gluster
10:08 ovaistariq joined #gluster
10:09 mhulsman joined #gluster
10:12 Bhaskarakiran joined #gluster
10:25 robb_nl joined #gluster
10:27 shubhendu joined #gluster
10:34 [Enrico] joined #gluster
10:42 baojg joined #gluster
10:53 ira joined #gluster
10:57 jdarcy joined #gluster
10:58 lanning joined #gluster
11:01 haomaiwa_ joined #gluster
11:01 Wizek joined #gluster
11:09 ovaistariq joined #gluster
11:27 johnmilton joined #gluster
11:28 DV__ joined #gluster
11:37 unclemarc joined #gluster
11:46 ggarg joined #gluster
11:58 wnlx joined #gluster
12:00 gem joined #gluster
12:01 gem joined #gluster
12:01 haomaiwa_ joined #gluster
12:03 kotreshhr joined #gluster
12:06 robb_nl joined #gluster
12:09 ovaistariq joined #gluster
12:13 sakshi joined #gluster
12:15 shaunm joined #gluster
12:18 atalur joined #gluster
12:21 Hesulaan joined #gluster
12:33 m0zes joined #gluster
12:37 atalur joined #gluster
12:39 B21956 joined #gluster
12:40 sabansal_ joined #gluster
12:41 B21956 joined #gluster
12:42 EinstCrazy joined #gluster
12:43 nbalacha joined #gluster
12:44 arcolife joined #gluster
12:46 plarsen joined #gluster
12:47 hackman joined #gluster
12:50 baojg joined #gluster
12:51 RayTrace_ joined #gluster
12:57 haomaiwa_ joined #gluster
12:58 hamiller joined #gluster
12:59 Ulrar So I added another brick, but this time I can't get it to start replicating
13:00 Ulrar I have a replica 3 with 3 nodes, but the third node is empty and even a heal full doesn't seem to start the copy
13:00 arcolife joined #gluster
13:06 RayTrace_ joined #gluster
13:10 ovaistariq joined #gluster
13:11 coredump joined #gluster
13:14 ashiq joined #gluster
13:15 robb_nl joined #gluster
13:16 anil joined #gluster
13:18 DV joined #gluster
13:20 ira joined #gluster
13:24 Ulrar It's weird, it's not even creating the directories in the brick
13:27 bluenemo joined #gluster
13:28 pjrebollo joined #gluster
13:29 jiffin joined #gluster
13:31 pjrebollo Last week I upgraded an instance of Gluster to v3.7.8.  Now I noticed that v3.7.9 was released.  Is there any way to upgrade to the latest version without stopping the volumes?
13:35 shyam joined #gluster
13:37 jiffin joined #gluster
13:37 mpietersen joined #gluster
13:38 kdhananjay joined #gluster
13:40 haomaiwa_ joined #gluster
13:42 unclemarc joined #gluster
13:43 robb_nl joined #gluster
13:46 rwheeler joined #gluster
13:47 Jiffin joined #gluster
13:49 atalur pjrebollo, You can try bringing bricks from each node down, upgrading the node, and bringing it back
13:50 atalur pjrebollo, do the same one after another from each node in your cluster
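A hedged sketch of that rolling-upgrade procedure on RHEL/CentOS, one node at a time (service and package names assume the usual gluster packaging):

    systemctl stop glusterd                # or: service glusterd stop on EL6
    pkill glusterfsd; pkill glusterfs      # stop this node's brick and auxiliary processes
    yum update glusterfs\*                 # pull 3.7.9 from the configured repo
    systemctl start glusterd
    gluster volume heal <VOLNAME> info     # wait for pending heals to drain before the next node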
13:50 pjrebollo Is that safe?
13:51 atalur That is how users do it when they don't want to stop the volume
13:51 pjrebollo Ok.
13:55 amye joined #gluster
13:57 kshlm joined #gluster
14:01 haomaiwa_ joined #gluster
14:01 nbalacha joined #gluster
14:02 vmallika joined #gluster
14:07 ggarg joined #gluster
14:08 jri joined #gluster
14:08 bowhunter joined #gluster
14:11 jotun joined #gluster
14:11 Gnomethrower joined #gluster
14:11 ovaistariq joined #gluster
14:16 uebera|| joined #gluster
14:16 uebera|| joined #gluster
14:16 Hesulan joined #gluster
14:16 anoopcs joined #gluster
14:16 [Enrico] joined #gluster
14:16 Kins joined #gluster
14:18 blu_ joined #gluster
14:18 bennyturns joined #gluster
14:18 karnan joined #gluster
14:18 bluenemo joined #gluster
14:18 al joined #gluster
14:19 d0nn1e joined #gluster
14:21 Hesulan joined #gluster
14:28 skylar joined #gluster
14:29 Hesulan joined #gluster
14:29 DV joined #gluster
14:49 Apeksha joined #gluster
14:49 kshlm Weekly community meeting starts in 10 minutes in #gluster-meeting
14:50 Gnomethrower joined #gluster
14:50 gowtham joined #gluster
14:55 jdarcy joined #gluster
14:55 atinm joined #gluster
14:57 atalur joined #gluster
15:01 haomaiwa_ joined #gluster
15:02 Apeksha joined #gluster
15:03 nathwill joined #gluster
15:03 skoduri joined #gluster
15:03 hagarth joined #gluster
15:05 wushudoin joined #gluster
15:12 ovaistariq joined #gluster
15:14 kshlm joined #gluster
15:16 camg joined #gluster
15:20 ShwethaHP joined #gluster
15:26 camg rafi, nbalacha, sakshi: Any luck?
15:27 nbalacha camg, I have not had a chance to look at this yet
15:27 nbalacha camg, sorry about that
15:28 camg ok, I'll update the bug report
15:29 camg as a short-term fix, the solution in the original report is not practical (stopping the volume & removing the symlinks on the bricks)
15:30 camg I tried disabling self-heal, but that is difficult to do & may not fix the problem anyway
15:31 dlambrig_ joined #gluster
15:32 rafi camg: I tried to reproduce the issue, as per the reproduction steps given in the bug
15:32 rafi camg: do you have anything to add to reproduce the issue quickly ?
15:33 rafi camg: I'm not sure whether I understood the reproduction method or not
15:33 camg rafi: only that it seems specific to symlinks on a replicated volume (the original was replicated, mine is distributed-replicated)
15:34 camg rm -v symlink; wait 24; ls -l symlink
15:34 atalur joined #gluster
15:34 camg It will always reappear
15:36 camg It seems clear that gluster must handle symlinks in a different manner than regular files.  They don't seem to have gfid for example.
15:38 rafi camg: I will summarize the step, please correct me if i'm wrong
15:38 rafi 1) Create a directory and fill some data
15:38 rafi 2) create a symlink and delete it
15:38 camg yes exactly
15:39 camg deleting through the fuse client
15:39 rafi camg: okey
15:40 camg also through the fuse client on the nodes themselves.  I am trying it from another host
15:41 camg yes it comes back
15:41 camg 3) wait between 2-20 seconds
15:42 karnan joined #gluster
15:42 camg 4) ls or stat the symlink -- is it there?
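Putting those steps together, a hedged reproducer sketch (mount point and names are hypothetical), run from a FUSE client:

    cd /mnt/glustervol
    mkdir testdir && cp -a /etc/hosts testdir/    # 1) a directory with some data in it
    ln -s testdir testlink                        # 2) create a symlink to it...
    rm -v testlink                                #    ...and delete it through the client
    sleep 20                                      # 3) wait a few seconds
    ls -l testlink                                # 4) on an affected volume the link reappears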
15:44 F2Knight joined #gluster
15:48 camg Actually directories don't have gfid but they do have trusted.glusterfs.dht
15:49 hamiller joined #gluster
15:49 nbalacha camg, dirs should have gfid
15:50 camg Um this particular directory (the symlink target) has a gfid on the main brick but not the distributed brick
15:50 camg nbalacha: yes thanks, confusing myself :)
15:51 camg but for distributed volumes it seems the directory only has a gfid on one brick?
15:51 camg and  trusted.glusterfs.dht on both (with different values)
15:52 camg nevermind now it has it on both!?
15:52 nbalacha camg, you should see a dir on every brick
15:53 nbalacha camg, the dir will also have the gfid on every brick and that must be the same
15:53 nbalacha camg however the trusted.glusterfs.dht values may be different
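A hedged way to verify that on the bricks themselves (brick paths are hypothetical; run on the backing filesystem, not through the mount):

    getfattr -m . -d -e hex /data/brick1/somedir /data/brick2/somedir
    # expect trusted.gfid to be identical on every brick,
    # while trusted.glusterfs.dht (the layout range) normally differs per brick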
15:54 camg nbalacha: Yes but I have had a problem with this volume and gfid (both mismatch and missing)
15:54 nbalacha camg, that can cause problems - we should fix that
15:54 JoeJulian That's when I look for the client that's not connecting to all the bricks.
15:54 kshlm joined #gluster
15:56 camg Hi Joe!  peer status always shows connected.  Is there a client side version of peer status?
16:01 haomaiwa_ joined #gluster
16:02 Gnomethrower joined #gluster
16:02 camg netcat checks out too, even though the "input/output error" was occurring (during my earlier incident with mismatched gfid)
16:03 nehar joined #gluster
16:05 Norky joined #gluster
16:07 nbalacha camg, did you try to rename/delete the dir?
16:10 camg nbalacha:  I can rename, but then a symlink with the original name appears (so there will be two)
16:12 camg This bug presents the exact same scenario: https://bugzilla.redhat.com/show_bug.cgi?id=1140818
16:12 glusterbot Bug 1140818: high, unspecified, ---, bugs, NEW , symlink changes to directory, that reappears on removal
16:12 ovaistariq joined #gluster
16:14 camg rafi, nbalacha: Can you reproduce the bug?
16:17 MrAbaddon joined #gluster
16:19 sage joined #gluster
16:37 camg https://bugzilla.redhat.com/show_bug.cgi?id=1295360
16:37 glusterbot Bug 1295360: urgent, urgent, ---, rkavunga, MODIFIED , [Tier]: can not delete symlinks from client using rm
16:39 camg This seems similar.  Since tiering uses "DHT of DHT" this could have the same cause, yes?
16:46 camg Wow even if I remove the directory "above" the symlink, gluster recreates the directory AND the symlink (but not the renamed symlink)
16:50 kanagaraj joined #gluster
16:53 vmallika joined #gluster
16:53 calavera joined #gluster
16:55 camg nbalacha: Which dir were you referring to?  The target of the symlink?  The symlink?
17:01 haomaiwa_ joined #gluster
17:04 karnan joined #gluster
17:04 kovshenin joined #gluster
17:06 RameshN joined #gluster
17:13 ovaistariq joined #gluster
17:19 emitor joined #gluster
17:20 nbalacha camg, sorry - stepped away
17:20 nbalacha cmg
17:21 nbalacha camg, the target of the symlink
17:21 nbalacha camg, how many clients are accessing the dir?
17:22 pjrebollo joined #gluster
17:34 armyriad joined #gluster
17:36 Hesulan joined #gluster
17:40 emitor Hi, I have some problems when I enable quota on a gluster 3.7.6 volume. The message "...[marker-quota.c:483:mq_get_set_dirty]... failed to get inode ctx for..." starts to appear in the brick log
17:41 emitor do you know why this happens?
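For reference, a hedged sketch of the quota commands involved and where that message shows up (volume name, directory, and limit are placeholders):

    gluster volume quota <VOLNAME> enable
    gluster volume quota <VOLNAME> limit-usage /some/dir 10GB
    gluster volume quota <VOLNAME> list
    # the mq_get_set_dirty messages land in the brick logs,
    # e.g. /var/log/glusterfs/bricks/<brick-path>.log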
17:41 camg nbalacha: at least 10 clients have that volume mounted
17:41 camg nbalancha: or do you mean are there processes with that dir as CWD?
17:42 shubhendu joined #gluster
17:42 nbalacha camg, is the problem with only a particular dir? Or does it happen if you create a new test dir on the vol
17:42 nbalacha camg, no I meant any clients that might be accessing that dir or its contents
17:44 camg any dir
17:44 camg all clients
17:51 hchiramm joined #gluster
18:00 camg nbalacha: There is something unique about the unremovable symlinks.  They were copied into the gluster volume from another host (via rsync).
18:01 camg I cannot create new unremovable symlinks in a different location in the volume
18:01 haomaiwa_ joined #gluster
18:01 nbalacha camg, do you have steps to reproduce this issue?
18:02 camg nbalacha: I will try.
18:07 camg nbalacha, ravi: I will try to reproduce the bug with new symlinks.  Regarding the unremovable symlinks, would it help to turn logging to DEBUG?
18:07 nbalacha camg, that is worth a try
18:08 hagarth joined #gluster
18:08 nbalacha camg, can you also provide the listing and xattrs on both the unremovable symlink and its target dir from the backend bricks?
18:08 camg The files in bug 1140818 were also copied in via rsync
18:08 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1140818 high, unspecified, ---, bugs, NEW , symlink changes to directory, that reappears on removal
18:09 camg nbalacha: the symlinks seem to only have selinux xattrs and getfattr goes into a loop on the symlinks
18:10 nbalacha camg, that is strange
18:10 camg nbalacha: Yes I will look at the xattr for the target directory on each brick.  So far I have been looking at the distributed bricks
18:10 ivan_rossi joined #gluster
18:10 deniszh joined #gluster
18:11 camg nbalacha:  Ah the loop is due to the recursive functionality of getfattr
18:12 camg And I didn't notice that getfattr has a -h option for no-dereference
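A hedged example of inspecting the symlink itself rather than its target (path is hypothetical):

    # -h / --no-dereference examines the link, not the directory it points to
    getfattr -h -m . -d -e hex /data/brick1/path/to/symlink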
18:13 nbalacha camg, I will be signing off now - it is almost midnight.
18:13 camg nbalacha:  One distributed brick has a trusted.glusterfs.dht.linkto xattr for the symlink but not the other
18:14 nbalacha camg, if you can let us know how to reproduce the issue, we can take a look at it tomorrow
18:14 camg nbalacha: Ah yes, thanks for your time and consideration!
18:14 ovaistariq joined #gluster
18:15 nbalacha camg, anytime. :)
18:15 camg nbalacha:  Ha, it was midnight here (west coast north america) when I asked about the bug!
18:15 camg nbalacha: Goodnight
18:19 dlambrig_ joined #gluster
18:23 amye joined #gluster
18:42 mhulsman joined #gluster
18:52 kanagaraj joined #gluster
18:54 glusterbot` joined #gluster
18:56 unforgiven512 joined #gluster
18:57 legreffier joined #gluster
18:57 unforgiven512 joined #gluster
18:58 unforgiven512 joined #gluster
18:58 cliluw joined #gluster
18:58 unforgiven512 joined #gluster
18:59 unforgiven512 joined #gluster
18:59 unforgiven512 joined #gluster
19:00 unforgiven512 joined #gluster
19:01 haomaiwang joined #gluster
19:01 unforgiven512 joined #gluster
19:02 samikshan joined #gluster
19:07 MrAbaddon joined #gluster
19:14 plarsen joined #gluster
19:14 gessitin joined #gluster
19:15 ovaistariq joined #gluster
19:15 bennyturns joined #gluster
19:16 Wizek_ joined #gluster
19:17 dtrainor_ joined #gluster
19:19 Philambdo joined #gluster
19:19 sankarshan_away joined #gluster
19:19 armyriad joined #gluster
19:20 liibert joined #gluster
19:20 Iouns joined #gluster
19:21 dastar joined #gluster
19:22 robb_nl joined #gluster
19:25 Champi joined #gluster
19:25 renout joined #gluster
19:26 virusuy joined #gluster
19:33 hackman joined #gluster
19:43 calavera joined #gluster
19:59 luizcpg joined #gluster
20:01 haomaiwa_ joined #gluster
20:05 cliluw joined #gluster
20:15 ovaistariq joined #gluster
20:28 primehaxor joined #gluster
20:29 primehaxor joined #gluster
20:53 F2Knight joined #gluster
20:58 amye joined #gluster
21:01 haomaiwa_ joined #gluster
21:05 calavera joined #gluster
21:13 rjoseph|afk joined #gluster
21:14 anil joined #gluster
21:14 shruti joined #gluster
21:15 sac joined #gluster
21:16 ovaistariq joined #gluster
21:16 deniszh joined #gluster
21:17 deniszh1 joined #gluster
21:22 calavera joined #gluster
21:25 shyam joined #gluster
21:32 msvbhat joined #gluster
21:32 shruti joined #gluster
21:34 sac joined #gluster
21:35 bennyturns joined #gluster
21:38 plarsen joined #gluster
21:40 anil joined #gluster
21:42 lalatenduM joined #gluster
21:50 F2Knight joined #gluster
22:01 haomaiwa_ joined #gluster
22:04 misc joined #gluster
22:08 btspce joined #gluster
22:08 DV joined #gluster
22:21 F2Knight joined #gluster
22:25 btspce joined #gluster
22:28 mowntan I have a small (3-node) gluster cluster and today I noticed that I have a split brain on the .trash folder... what is the trash folder, and can I fix this easily?
22:29 mowntan I've read this: https://gluster.readthedocs.org/en/latest/Troubleshooting/split-brain/
22:29 glusterbot Title: Split Brain - Gluster Docs (at gluster.readthedocs.org)
22:30 mowntan Haha, thanks glusterbot.. read the doc... there has got to be an easier way to fix this issue
22:32 mz_ joined #gluster
22:33 btspce joined #gluster
22:34 btspce anyone ?
22:35 mzink joined #gluster
22:38 mzink left #gluster
22:39 gbox mowntan:  How do you know the .trash folder is in split-brain?  Does "gluster volume heal <VOLUMENAME> info" list that dir?
22:39 gbox mowntan: gluster has a .trashcan directory but perhaps your .trash folder is for GNOME or KDE?
22:41 gbox mowntan: split-brain indicates one copy is correct and the other incorrect.  Since it's .trash you could simply delete that directory on one brick.  3-node?  Is it distributed or distributed-replicated?
22:46 mowntan gbox: ya, I'm running "gluster volume heal <volume> info"
22:46 gbox mowntan: sometimes it lists the actual path or else it lists the gfid
22:47 mowntan its just a replicated volume
22:48 gbox mowntan: does one node have 2 bricks?
22:48 gbox mownan: or 3-replicate?
22:48 gbox mowntan: or 3-replicate?
22:50 btspce joined #gluster
22:51 btspce can anyone help solve a glusterd install that won't start after upgrading to 3.7.9?
22:53 amye joined #gluster
22:53 cliluw joined #gluster
23:01 haomaiwang joined #gluster
23:01 misc joined #gluster
23:02 mowntan gbox: 3-replicate
23:04 gbox mowntan: Do you need anything in that dir?
23:04 mowntan gbox: nope... from what I am reading, it looks like a gluster provided directory
23:05 gbox mowntan: Is it .trashcan or .trash?
23:05 mowntan it's ".trashcan"
23:06 mowntan It's documented here: http://www.gluster.org/community/documentation/index.php/Features/Trash
23:06 mowntan a feature of 3.7.0
23:06 gbox mowntan: it's .trashcan
23:07 mowntan gbox: yes
23:08 btspce joined #gluster
23:08 mowntan gbox: from the docs - The name for trash directory is user configurable option and its default value is ".trashcan". It can be configured only when volume is started. We cannot remove and rename the trash directory from the mount (like .glusterfs directory)
23:08 gbox mowntan: Sure.  There's a translator just for that new feature.
23:10 mowntan gbox: any idea how to resolve the split-brain on .trashcan
23:10 gbox mowntan: Do you want that feature?  You could disable it.  Since it's new and sort of a catch-all for files it may lead to split-brain more than usual.
23:11 gbox mowntan: Pick a brick.  Delete all the files in .trashcan on the other 2 bricks.  Then it will automatically resolve.  Double-check there's nothing in there you care about.
23:11 mowntan Reading the doc, it's a nice feature
23:11 gbox mowntan: Or you could go through the documented steps (which will take more time).
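Besides deleting the bad copy directly on the bricks, 3.7 also ships a CLI for split-brain resolution; a hedged sketch (host, brick path and volume name are placeholders, and entry split-brain on a directory may still need the manual route gbox describes):

    # list what is currently in split-brain
    gluster volume heal <VOLNAME> info split-brain
    # tell AFR which brick holds the good copy of a split-brained entry
    gluster volume heal <VOLNAME> split-brain source-brick <host>:<brick-path> /.trashcan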
23:13 gbox mowntan:  Is it the directory itself that's split-brain or files in it?
23:15 mowntan gbox: It looks like its the directory itself
23:16 mowntan gbox: no files in the directory (in any of the bricks)
23:16 gbox mowntan: So you'd have to delete the entire directory on two bricks or decode the discrepancy.  Might be worth it to understand the changelog structure.
23:17 mowntan gbox: I don't mind providing the changelog if you want to see how I got into this state
23:18 mowntan gbox: looks to me to be a bug
23:18 mowntan gbox: do you have some docs on how to do that?
23:21 mowntan gbox: sorry, I mistakenly said that there was no data in any of the bricks, but that was through nfs. If I look at the raw brick on one of the servers I see an "internal_op" directory
23:25 mowntan gbox: I've fixed the issue, I removed the directory "internal_op" and started a heal. Looking at the documentation, it looks like this is related to ovirt (this volume is an ovirt data store)
23:27 gbox mowntan:  OK yeah it's always confusing with these hidden directories.
23:28 mowntan gbox: all is well with the world again... thanks gbox
23:28 gbox mowntan: yes, good luck
23:29 calavera joined #gluster
23:29 mowntan gbox: I will say this though, after reading the split-brain docs again, it's a terribly confusing process... glad I didn't have to run through it
23:31 SpeeR joined #gluster
23:33 SpeeR I just installed gluster and gluster-fuse on a rhel box, and it looks like it didn't install? Mar 23 16:27:24 Installed: glusterfs-3.7.9-1.el6.x86_64
23:34 SpeeR mount -t glusterfs prod-datastore1:/qa_datastore /mnt/temp-qa-fsstore
23:34 SpeeR mount: unknown filesystem type 'glusterfs'
23:39 gbox mowntan: the changelog is a nice design.  Some summary scripts would be helpful.  gluster volume heal <V> info is very rudimentary
23:43 misc joined #gluster
23:45 gbox SpeeR: Did you use dnf, yum, or rpm to install?  Might be a dependency issue.  gluster needs lots of fuse libraries
23:46 gbox SpeeR: Do you have fuse and fuseblk in /proc/filesystems?
23:49 mowntan gbox: I like gluster, but I'm curious to see what Red Hat does to help with the operational toolset; it does lack some intuitive tools for management. For now it's just in our lab/testing environment.
23:49 mowntan gbox: thanks again for your help
23:50 chromatin I asked last night and @JoeJulian was helpful but we were unsuccessful. Does anyone know why if I can read from a brick at 1-2 GByte/sec, reading from the gluster volume (single node, no peers) maxes out at 200 MB/sec ? It is going to be a deal killer
23:52 gbox mowntan:  Sure yeah take your time.  I waited until 3.7 as well and it quickly imploded on me.  Redhat has high expectations for users.  ansible could help.
23:55 SpeeR sorry, using yum
23:55 SpeeR rpm -qa |grep fuse
23:55 SpeeR fuse-libs-2.8.3-4.el6.x86_64
23:55 SpeeR fuse-2.8.3-4.el6.x86_64
23:56 mowntan joined #gluster
23:56 mowntan joined #gluster
23:56 mowntan joined #gluster
23:59 gbox SpeeR: modprobe fuse
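A hedged checklist combining that suggestion with the usual cause of "unknown filesystem type 'glusterfs'" (the mount helper comes from the fuse sub-package, not the base package; paths assume EL6-style packaging):

    modprobe fuse
    grep fuse /proc/filesystems                 # should now list fuse
    rpm -q glusterfs-fuse                       # provides the mount helper
    ls /sbin/mount.glusterfs
    mount -t glusterfs prod-datastore1:/qa_datastore /mnt/temp-qa-fsstore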
