
IRC log for #gluster, 2016-05-10


All times shown according to UTC.

Time Nick Message
00:01 haomaiwang joined #gluster
00:21 plarsen joined #gluster
00:44 portante joined #gluster
00:44 kkeithley joined #gluster
00:45 ndk joined #gluster
01:01 al joined #gluster
01:18 DV__ joined #gluster
01:21 haomaiwang joined #gluster
01:31 EinstCrazy joined #gluster
01:33 MugginsM joined #gluster
01:38 Gnomethrower joined #gluster
01:41 rafi joined #gluster
01:46 bennyturns joined #gluster
01:59 Lee1092 joined #gluster
02:02 ndk joined #gluster
02:08 EinstCrazy joined #gluster
02:08 harish_ joined #gluster
02:10 sakshi joined #gluster
02:12 luizcpg joined #gluster
02:21 gem joined #gluster
02:31 jockek joined #gluster
02:42 mchangir_ joined #gluster
02:55 RameshN joined #gluster
02:56 luizcpg joined #gluster
02:57 aravindavk joined #gluster
02:57 mowntan joined #gluster
03:13 Gnomethrower joined #gluster
03:24 nehar joined #gluster
03:28 EinstCrazy joined #gluster
03:29 MugginsM joined #gluster
03:34 atinm joined #gluster
03:41 nbalacha joined #gluster
03:46 EinstCra_ joined #gluster
03:54 itisravi joined #gluster
03:58 mowntan joined #gluster
04:02 nishanth joined #gluster
04:07 shubhendu joined #gluster
04:12 DV joined #gluster
04:29 Saravanakmr joined #gluster
04:29 burn joined #gluster
04:30 spalai joined #gluster
04:36 atalur joined #gluster
04:38 MugginsM joined #gluster
04:38 gem joined #gluster
04:39 burn joined #gluster
04:42 MugginsM joined #gluster
04:43 rafi joined #gluster
04:46 overclk joined #gluster
04:48 MugginsM joined #gluster
04:51 aspandey joined #gluster
04:51 MugginsM joined #gluster
04:52 MugginsM just discovered I've been running Gluster 3.7.8 for a month, with a massive performance problem
04:52 * MugginsM sighs
04:52 MugginsM I suck, should've had some metric to see that
04:52 JoeJulian You're not using my old metric?
04:52 JoeJulian @Joe's metric
04:52 JoeJulian @metric
04:52 glusterbot JoeJulian: I do not know about 'metric', but I do know about these similar topics: 'Joe's performance metric'
04:52 JoeJulian @Joe's performance metric
04:53 glusterbot JoeJulian: nobody complains.
04:53 MugginsM it was at the same time as an increase in load, so we expected a little bit of slow down
04:53 MugginsM need moar graphs
04:55 MugginsM aaaand 3.7.11 takes previous 40 minute job down to 7 minutes :-O
04:55 JoeJulian Woot!
04:56 MugginsM now to decide if to kill/restart the week long jobs :-/
04:59 MugginsM I'm a crappy sysadmin. took me over a month to spot that kind of degradation :-(
04:59 * MugginsM sniffs
05:08 karthik___ joined #gluster
05:09 raghug joined #gluster
05:12 gowtham joined #gluster
05:14 poornimag joined #gluster
05:15 mowntan joined #gluster
05:15 karnan joined #gluster
05:17 natarej joined #gluster
05:18 ndarshan joined #gluster
05:19 kotreshhr joined #gluster
05:19 Gnomethrower joined #gluster
05:20 davpostpunk anyone know how to fix the transaction lock issues?
05:21 spalai left #gluster
05:22 harish_ joined #gluster
05:23 hgowtham joined #gluster
05:25 Apeksha joined #gluster
05:28 DV joined #gluster
05:31 rafi1 joined #gluster
05:32 JoeJulian I'm not aware of any transaction lock issues.
05:34 aravindavk joined #gluster
05:37 jiffin joined #gluster
05:42 hchiramm joined #gluster
05:47 Bhaskarakiran joined #gluster
05:50 rafi joined #gluster
05:53 mchangir_ joined #gluster
05:53 pur joined #gluster
05:55 atalur joined #gluster
05:55 kovshenin joined #gluster
05:58 atinm davpostpunk, what's the issue on the transaction?
05:59 Wizek joined #gluster
06:00 spalai joined #gluster
06:04 beeradb_ joined #gluster
06:08 Manikandan joined #gluster
06:16 MikeLupe joined #gluster
06:26 kshlm joined #gluster
06:26 jtux joined #gluster
06:27 skoduri joined #gluster
06:30 prasanth joined #gluster
06:30 kdhananjay joined #gluster
06:37 ashiq_ joined #gluster
06:42 DV joined #gluster
06:43 kshlm joined #gluster
06:46 Manikandan joined #gluster
06:49 jtux joined #gluster
06:52 [Enrico] joined #gluster
06:53 mchangir_ joined #gluster
06:57 raghug joined #gluster
06:58 atalur joined #gluster
07:02 nbalacha joined #gluster
07:03 atinm joined #gluster
07:06 skoduri joined #gluster
07:11 wnlx joined #gluster
07:14 hackman joined #gluster
07:19 rafi joined #gluster
07:19 np_ joined #gluster
07:19 ivan_rossi joined #gluster
07:19 kshlm joined #gluster
07:33 anil_ joined #gluster
07:34 Gnomethrower joined #gluster
07:36 fsimonce joined #gluster
07:45 mchangir_ joined #gluster
07:46 nbalacha joined #gluster
07:47 DV joined #gluster
07:49 atinm joined #gluster
08:02 RameshN joined #gluster
08:10 ramky joined #gluster
08:13 level7 joined #gluster
08:17 raghug joined #gluster
08:21 ketyosz joined #gluster
08:24 skoduri joined #gluster
08:25 harish_ joined #gluster
08:26 Slashman joined #gluster
08:43 rastar joined #gluster
08:47 atinm joined #gluster
08:50 ahino joined #gluster
08:52 mowntan joined #gluster
08:52 MikeLupe joined #gluster
08:53 deniszh joined #gluster
09:02 DV__ joined #gluster
09:08 aravindavk joined #gluster
09:16 Saravanakmr joined #gluster
09:22 post-factum what one should do if folder is in split-brain?
09:29 Drankis joined #gluster
09:33 arcolife joined #gluster
09:36 atinm joined #gluster
09:40 bfoster joined #gluster
09:46 atalur post-factum, were you able to resolve split-brain in directory?
09:46 post-factum atalur: nope, it is still in split-brain
09:46 atalur post-factum, http://gluster.readthedocs.io/en/latest/Troubleshooting/split-brain/
09:46 glusterbot Title: Split Brain - Gluster Docs (at gluster.readthedocs.io)
09:46 post-factum oh, stop
09:46 post-factum it is resolved now
09:46 post-factum hm
09:47 post-factum i've just removed broken files from it, and it is ok now
09:47 post-factum so nvm :)
09:47 atalur :)
09:48 post-factum i believe the issue was not in the folder itself but in the list of files within the folder
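For reference, a quick way to double-check the state after cleaning up files like this (a minimal sketch; "gv0" is a placeholder volume name, not taken from the discussion above):

    # list entries the self-heal daemon still flags as split-brain
    gluster volume heal gv0 info split-brain
    # broader view: everything still pending heal
    gluster volume heal gv0 info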
09:48 ketyosz hello gluster community
09:48 ketyosz I have an 8 brick distributed replicated system
09:49 ketyosz every host has the gluster mounted on localhost
09:49 ketyosz 2 webservers, 2 app servers, 2 controllers, 2 proxies
09:50 ketyosz and I have a directory with many small files which cannot be deleted because it is not empty
09:50 ketyosz I tried rebalance and fix layout
09:50 ketyosz all completed with success
09:51 ketyosz I've checked split brain and peer status
09:51 ketyosz no split brain no peer connection problem
09:51 ketyosz it is a 3.7.11
09:51 sakshi_b ketyosz, can you check if there are any contents in the directory in the backend?
09:53 ketyosz I can see the content existing on the bricks, but only the directories without the files
09:53 sakshi_b yes, this error is there in the 3.7.11 version
09:54 ketyosz actually it blocks a deployment
09:54 ketyosz is there any fix?
09:54 ketyosz downgrade or anything?
09:54 sakshi_b to fix it you can delete the remaining directory from the backend and then proceed with your rmdir operation from the mount
09:54 ketyosz I see
09:55 sakshi_b this error would probably have arisen if you were doing parallel rmdir, i.e. multiple clients of the same volume doing rmdir
09:55 ketyosz so I have to log in to all backends, access the folders backing the bricks, and delete the dirs manually?
09:55 paul98 joined #gluster
09:55 paul98 hi, are there any docs on setting up a windows client using the iSCSI interface?
09:55 paul98 all docs i see is for linux
09:56 sakshi_b ketyosz, if you find that troublesome, there is another way also
09:56 ketyosz really? listening...
09:56 sakshi_b ketyosz, you can do an ls <full_path_to_directory>
09:56 sakshi_b after this you will be able to see the entry from the mount and re-do your rmdir operation
09:57 ketyosz excuse me, but I do not understand this
09:58 sakshi_b ketyosz, from the mount point do an ls on the directory that you can see from the backend but not from the mount point
10:02 spalai left #gluster
10:03 spalai joined #gluster
10:03 kkeithley1 joined #gluster
10:04 Gnomethrower joined #gluster
10:10 DV__ joined #gluster
10:11 mowntan joined #gluster
10:17 nangthang joined #gluster
10:27 ketyosz for the records: the solution is/was to delete the problematic folder from all the backend bricks
10:27 ketyosz many thx for sakshi_b
10:28 ketyosz left #gluster
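For reference, a rough sketch of the two workarounds sakshi_b describes above, assuming the volume is mounted at /mnt/gv0 and each brick lives under /bricks/brick1 (both paths hypothetical):

    # option 1: from the client mount, look the directory up again, then retry the rmdir
    ls /mnt/gv0/path/to/stuck-dir
    rmdir /mnt/gv0/path/to/stuck-dir

    # option 2: remove the leftover directory from the backend on every brick host,
    # then retry the rmdir from the mount
    rm -rf /bricks/brick1/path/to/stuck-dir    # repeat on each server
    rmdir /mnt/gv0/path/to/stuck-dir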
10:31 EinstCrazy joined #gluster
10:34 dlambrig_ joined #gluster
10:43 DV joined #gluster
10:56 mchangir_ joined #gluster
11:00 paul98 exit
11:13 gem joined #gluster
11:17 mchangir_ joined #gluster
11:19 hgowtham joined #gluster
11:19 hgowtham joined #gluster
11:21 ppai joined #gluster
11:24 johnmilton joined #gluster
11:33 JesperA joined #gluster
11:35 Debloper joined #gluster
11:37 kkeithley_ Community Bug Triage meeting in ~20 minutes in #gluster-meeting
11:41 level7 joined #gluster
11:43 hgowtham joined #gluster
11:46 Bhaskarakiran joined #gluster
12:00 kkeithley_ Community Bug Triage meeting _now_, in #gluster-meeting
12:03 social joined #gluster
12:04 ramky joined #gluster
12:05 RameshN joined #gluster
12:06 aravindavk joined #gluster
12:09 andy-b joined #gluster
12:11 kotreshhr left #gluster
12:14 luizcpg joined #gluster
12:24 lanning joined #gluster
12:29 shaunm joined #gluster
12:37 ramky joined #gluster
12:39 nbalacha joined #gluster
12:40 unclemarc joined #gluster
12:45 btpier joined #gluster
12:48 hi11111 joined #gluster
12:49 DV joined #gluster
12:52 Slashman joined #gluster
12:55 mowntan joined #gluster
12:55 haomaiwang joined #gluster
12:57 EinstCrazy joined #gluster
12:58 aravindavk joined #gluster
12:58 julim joined #gluster
13:01 spalai left #gluster
13:01 haomaiwang joined #gluster
13:01 shubhendu_ joined #gluster
13:02 plarsen joined #gluster
13:04 partner can someone recall any bug that would cause the inode table to grow with version 3.6.6 on centos 7? since the day it was installed my graphs show a roughly 10k increase per week in here: /proc/sys/fs/inode-nr
13:07 mpietersen joined #gluster
13:08 nishanth joined #gluster
13:09 jermudgeon joined #gluster
13:09 mpietersen joined #gluster
13:11 partner maybe it's something elsewhere, that number stands for the number of inodes the system has allocated, but since the installation triggered the situation I thought to start asking here. it seems to be enough to just have the processes running, no volumes or clients or anything :o
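For reference, a small sketch of how one might narrow this down (the slab names below are the usual kernel ones and are assumptions, not taken from partner's report):

    # first field: inodes currently allocated system-wide
    cat /proc/sys/fs/inode-nr
    # see which kernel caches are actually growing (needs root)
    grep -E 'fuse_inode|inode_cache|dentry' /proc/slabinfo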
13:16 ndarshan joined #gluster
13:29 skoduri_ joined #gluster
13:29 jiffin1 joined #gluster
13:40 skylar joined #gluster
13:41 atinm joined #gluster
13:43 jobewan joined #gluster
13:50 Guest75533 joined #gluster
13:58 nbalacha joined #gluster
14:01 haomaiwang joined #gluster
14:04 bennyturns joined #gluster
14:10 luizcpg joined #gluster
14:11 tom[] i'm running 3.4.2 on ubuntu and considering upgrading. i have a few questions. is this ppa a good choice https://launchpad.net/~semiosis/+archive/ubuntu/ppa ?
14:11 glusterbot tom[]: The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
14:12 tom[] oh, glusterbot, you were always the best
14:12 julim joined #gluster
14:13 mchangir joined #gluster
14:19 tom[] reading about upgrading 3.4 -> 3.6, how much of this does the ubuntu package do?
14:31 tom[] my deployment is three hosts, each with a server and one brick, the servers providing replication. each host also has a client that mounts the fs from the local server. is this a candidate for a rolling upgrade (3.4 -> 3.6)?
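For reference, a rolling upgrade along those lines might look like the sketch below, run on one server at a time. The PPA name is an assumption based on the launchpad.net/~gluster page glusterbot links above, and <volname> is a placeholder; check the 3.6 upgrade notes before relying on this:

    sudo add-apt-repository ppa:gluster/glusterfs-3.6   # assumed PPA name, see links above
    sudo apt-get update
    sudo service glusterfs-server stop                  # stop glusterd (brick processes may need stopping too)
    sudo apt-get install glusterfs-server glusterfs-client
    sudo service glusterfs-server start
    gluster volume heal <volname> info                  # wait for heals to finish before the next server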
14:32 skyrat joined #gluster
14:32 dresantos joined #gluster
14:36 skyrat hi, is there a script or tool for resolving split brains? I want to resolve a directory-entries split brain in a replica 2 volume, saying: forget this directory on brick B entirely and use data from brick A to heal.
14:36 nehar joined #gluster
14:36 wnlx joined #gluster
14:43 Scotch joined #gluster
14:44 skyrat is there a tool for removing a directory on a brick recursively with all necessary hardlinks?
14:47 Scotch have a weird issue w/gluster client 3.7.11...have 2 directory names that are "invisible" via client but normal via file system.  Rename dir and gluster client sees them.  Change back and it disappears again.
14:48 Scotch content stays and is accessible IF you know the path/dir name
14:48 overclk joined #gluster
14:48 ndevos skyrat: there is policy-based split-brain resolution, I guess that is what you want
14:50 dresantos Hi to all.
14:50 dresantos I am going nuts trying to config geo-replication :(
14:50 dresantos I have used the georepsetup to set up the geo rep
14:50 dresantos But the problems start when I try to start the replication...
14:50 dresantos [root@gluster-brick1 ~]# gluster volume geo-replication status
14:50 dresantos No active geo-replication sessions
14:50 dresantos [root@gluster-brick1 ~]#
14:50 dresantos [root@gluster-brick1 ~]# gluster volume geo-replication gv0 gluster-geo-rep::gvRep create
14:50 dresantos Session between gv0 and gluster-geo-rep::gvRep is already created.
14:50 Slashman joined #gluster
14:50 dresantos geo-replication command failed
14:50 dresantos [root@gluster-brick1 ~]#
14:50 dresantos [root@gluster-brick1 ~]# gluster volume geo-replication gv0 gluster-geo-rep::gvRep start force
14:50 dresantos Session between gv0 and gluster-geo-rep::gvRep has not been created. Please create session and retry.
14:50 dresantos geo-replication command failed
14:50 dresantos Can anyone help? I am really struggling to get this working
14:51 ndevos skyrat: I think itisravi explained about it in http://events.linuxfoundation.org/sites/events/files/slides/glusterfs-AFR-LinuxCon_EU-2015_0.pdf
14:52 level7 joined #gluster
14:53 post-factum also, @paste
14:53 post-factum @paste
14:53 glusterbot post-factum: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
14:53 jiffin joined #gluster
14:53 post-factum dresantos: ^^
14:54 wnlx joined #gluster
14:56 Scotch anyone else have directories go invisible to gluster client but still exist with content at the file system level?
14:56 ndevos Scotch: I think that happens when the directories are in a gfid split-brain
14:56 mchangir joined #gluster
14:56 dresantos @past
14:56 glusterbot dresantos: I do not know about 'past', but I do know about these similar topics: 'paste', 'pasteinfo', 'pastepeer', 'pastestatus', 'pastevolumestatus'
14:57 dresantos @paste
14:57 glusterbot dresantos: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
14:58 Scotch ndevos: yick...ok I'll investigate
14:58 Scotch ndevos: thx ;)
14:58 skyrat ndevos, thanks for the reply
15:01 haomaiwang joined #gluster
15:02 squizzi_ joined #gluster
15:03 dresantos joined #gluster
15:10 dresantos Hi to all.
15:10 dresantos I am going nuts trying to config geo-replication :(
15:10 dresantos I have used the georepsetup to set up the geo rep
15:10 dresantos But the problems start when I try to start the replication...
15:10 dresantos Can anyone help? I am really struggling to get this working
15:11 dresantos Here are the output of the commands when i try to start the replication
15:11 dresantos http://termbin.com/bfyh
15:12 skyrat ndevos, the policy-based resolution is also not working, there is gfid split brain, the directory is broken entirely on one brick, the gfid is all zeros, gluster heal info split-brain says no files in split brain
15:14 ndevos skyrat: sorry, I dont know what the best way is to get that fixed, but maybe ,,(split-brain) helps you
15:14 glusterbot skyrat: To heal split-brains, see https://gluster.readthedocs.org/en/release-3.7.0/Features/heal-info-and-split-brain-resolution/ For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
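For reference, the CLI-based resolution described in the docs linked above looks roughly like this (a sketch; "gv0", the brick, and the file path are placeholders):

    # pick the bigger copy as the winner for one file...
    gluster volume heal gv0 split-brain bigger-file /path/inside/volume/file
    # ...or name the brick whose copy should be used as the source
    gluster volume heal gv0 split-brain source-brick server1:/bricks/brick1 /path/inside/volume/file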
15:14 JoeJulian skyrat: you can use splitmount for that.
15:15 skyrat I think I have to remove the broken file (and the gfid hard link) - but the file is a directory, what to do when it includes thousands of files? I will have to remove the contents also and calculate the gfid hard link names and remove them also, is there something to automate this?
15:16 JoeJulian Did you not read what was just printed?
15:18 skyrat JoeJulian, I'm reading it
15:21 skyrat JoeJulian, thank you, but i wrote the similar script doing the same on my own already. The problem is still the directory gfid split brain
15:22 skyrat JoeJulian, I'm investigating the splitmount now
15:27 skyrat JoeJulian, will the splitmount work for the entire dirs?
15:27 Scotch ndevos: sorry, I'm only using a distributed volume (no replication, etc).  Am I missing something basic re: "gfid" split brain vs. others?
15:28 ndevos Scotch: on distributed volumes the directories are replicated
15:28 JoeJulian Scotch: The directories exist on every brick and they need to have the same gfid in their ,,(extended attributes)
15:28 glusterbot Scotch: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
15:30 Scotch got it, thanks
15:30 skyrat JoeJulian, like: remove the dir recursively on the bad replica and then re-sync it from the good replica?
15:31 wushudoin joined #gluster
15:31 JoeJulian skyrat: That's what you were asking to do, yes, and you can do that with splitmount.
15:32 JoeJulian It splits the replica into N distribute volumes and mounts them. This allows you to delete from the mountpoint and gluster will handle all the .glusterfs tree stuff.
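Based on that description, usage is presumably along these lines (a sketch only; the exact invocation is an assumption, see JoeJulian's blog post linked by glusterbot above):

    # mount each replica of "gv0" separately, e.g. under /mnt/split/r1, /mnt/split/r2
    splitmount server1 gv0 /mnt/split
    # delete the bad copy from the replica you don't trust; the .glusterfs
    # bookkeeping is handled through the mount
    rm -rf /mnt/split/r2/path/to/bad-dir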
15:33 JoeJulian *If*, however, it's *just* directories that are split-brain in a replica volume, I usually just remove the trusted.afr attributes from them. I've never encountered an occasion where those were accurate.
15:34 JoeJulian I filed a bug report for that once a long time ago. I think it got closed without being fixed though.
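A sketch of that trusted.afr approach for a directory-only split-brain, run against the brick backends (volume name, client indices, and paths are placeholders; list the actual attributes with getfattr first, as in the extended-attributes factoid above):

    # inspect the directory's extended attributes on each brick
    getfattr -m . -d -e hex /bricks/brick1/path/to/dir
    # remove the per-replica change accounting; repeat for every trusted.afr.* shown
    setfattr -x trusted.afr.gv0-client-0 /bricks/brick1/path/to/dir
    setfattr -x trusted.afr.gv0-client-1 /bricks/brick1/path/to/dir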
15:36 kbyrne joined #gluster
15:37 skyrat JoeJulian, perfect! just trying that, will talk to you later, thank you
15:41 dlambrig_ left #gluster
15:42 wnlx joined #gluster
15:47 Wizek joined #gluster
15:47 rafi1 joined #gluster
15:48 gem joined #gluster
15:53 skoduri joined #gluster
15:56 Scotch ndevos: (and JoeJulian) the "invisible" directories have the same respective trusted.gfid across all bricks (although on one brick, re: one of the two directories in question, the trusted.glusterfs.dht is all '0')
15:56 robb_nl joined #gluster
16:01 haomaiwang joined #gluster
16:06 JoeJulian huh... that's interesting. I guess I would do a rebalance fix-layout to solve the all 0 dht.
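For reference, the suggested fix-layout run would be roughly (sketch; "gv0" is a placeholder volume name):

    gluster volume rebalance gv0 fix-layout start
    gluster volume rebalance gv0 status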
16:35 level7 joined #gluster
16:42 ashiq_ joined #gluster
17:01 haomaiwang joined #gluster
17:01 wushudoin joined #gluster
17:07 mowntan joined #gluster
17:08 mowntan joined #gluster
17:08 mowntan joined #gluster
17:13 shubhendu_ joined #gluster
17:13 mchangir joined #gluster
17:14 shubhendu_ joined #gluster
17:24 hagarth joined #gluster
17:58 B21956 joined #gluster
17:58 karnan joined #gluster
18:01 haomaiwang joined #gluster
18:15 bennyturns joined #gluster
18:23 gem joined #gluster
18:24 mowntan joined #gluster
18:25 squizzi_ joined #gluster
18:31 skyrat joined #gluster
18:31 kpease joined #gluster
18:41 hagarth joined #gluster
19:01 haomaiwang joined #gluster
19:17 jlockwood joined #gluster
19:18 jlockwood Hey folks, I'm reading that sharing the same volume via NFS and CIFS is not supported. Is this one of those things that you'll get no help with, or does it simply cause problems that make it untenable?
19:32 JoeJulian I've never heard any such thing.
19:32 JoeJulian Where did you read that?
19:33 JoeJulian jlockwood: ^^
19:58 JesperA joined #gluster
20:01 haomaiwang joined #gluster
20:34 jlockwood Hey JoeJulian got it off red hat page.
20:34 jlockwood https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/2.1_Update_2_Release_Notes/chap-Documentation-2.1_Update_2_Release_Notes-Known_Issues.html
20:34 glusterbot Title: Chapter 3. Known Issues (at access.redhat.com)
20:34 jlockwood ooooh, nice bot.
20:34 jlockwood :p
20:40 deniszh joined #gluster
20:42 JoeJulian Well bug 882769 was disputed by a lead gluster developer (at the time) and was only disputed by internal red hat emails (thanks for nothing Red Hat).
20:42 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=882769 medium, medium, ---, ira, CLOSED WONTFIX, Both NFS and CIFS are started automatically by default
20:42 JoeJulian It's also really old.
20:43 JoeJulian That said, oplocks have been a problem for me in the past and I'd always turned them off. ymmv.
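Turning oplocks off, as mentioned, is a per-share Samba setting; a minimal smb.conf sketch, with the share name and path hypothetical:

    [gv0]
        path = /mnt/gv0
        oplocks = no
        level2 oplocks = no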
20:50 delhage joined #gluster
20:58 shaunm joined #gluster
21:01 haomaiwang joined #gluster
21:03 mpietersen joined #gluster
21:11 johnmilton joined #gluster
21:14 beeradb__ joined #gluster
21:29 johnmilton joined #gluster
21:42 hackman joined #gluster
21:55 jad_jay_ joined #gluster
22:01 haomaiwang joined #gluster
22:26 d0nn1e joined #gluster
22:43 m0zes joined #gluster
23:01 mtanner joined #gluster
23:01 haomaiwang joined #gluster
23:04 m0zes joined #gluster
23:14 plarsen joined #gluster
23:27 luizcpg joined #gluster
23:43 luizcpg joined #gluster
23:53 s-hell joined #gluster
23:58 Gnomethrower joined #gluster
