
IRC log for #gluster, 2015-12-17


All times shown according to UTC.

Time Nick Message
00:13 EinstCrazy joined #gluster
00:16 lpabon joined #gluster
00:26 vmallika joined #gluster
00:40 hgichon joined #gluster
00:41 zhangjn joined #gluster
00:53 EinstCrazy joined #gluster
01:25 lezo joined #gluster
01:27 frankS2 joined #gluster
01:40 jbrooks joined #gluster
01:48 amye joined #gluster
01:52 RedW joined #gluster
01:54 nangthang joined #gluster
02:10 mlncn joined #gluster
02:12 Lee1092 joined #gluster
02:24 n0b0dyh3r3 joined #gluster
02:40 DV joined #gluster
02:53 amye joined #gluster
03:00 auzty joined #gluster
03:01 nishanth joined #gluster
03:23 sc0 joined #gluster
03:24 harish joined #gluster
03:24 plarsen joined #gluster
03:35 vmallika joined #gluster
03:38 atinm joined #gluster
03:42 Peppard joined #gluster
03:45 zhangjn joined #gluster
03:48 calavera joined #gluster
03:53 nbalacha joined #gluster
03:56 hagarth joined #gluster
03:57 Manikandan joined #gluster
03:59 amye joined #gluster
04:00 itisravi joined #gluster
04:02 nangthang joined #gluster
04:05 RameshN joined #gluster
04:08 pranithk joined #gluster
04:08 shubhendu joined #gluster
04:09 pranithk JoeJulian: regarding https://bugzilla.redhat.com/show_bug.cgi?id=1291479, nothing seems out of the ordinary; I don't see any other way than running the workload to find the leaks. Wondering if there is a simpler way of finding the leak?
04:09 glusterbot Bug 1291479: high, unspecified, ---, ndevos, ASSIGNED , Memory leak on Fuse client
04:10 JoeJulian Argh... just when I'm getting closer to putting this into production again. I really don't need a memory leak... Let me find our original interaction.
04:13 JoeJulian fuse client consuming 16G on 3.7.6 and pushing him into swap. Sounds like a leak.
04:13 JoeJulian https://botbot.me/freenode/gluster/2015-12-14/?msg=56135925&page=9
04:13 glusterbot Title: IRC Logs for #gluster | BotBot.me [o__o] (at botbot.me)
04:13 JoeJulian pranithk: ^
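
(For reference: the usual way to narrow down a client-side leak like the one discussed above is to take periodic statedumps of the FUSE client and compare the per-translator memory accounting between them. A minimal sketch, assuming the dumps land in the default /var/run/gluster directory:)

    # Trigger a statedump of the running FUSE client process(es)
    kill -USR1 $(pidof glusterfs)
    # Repeat after the workload has run for a while, then compare the dumps;
    # sections whose num_allocs/size keep growing point at the leaking translator
    ls -lt /var/run/gluster/glusterdump.*
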
04:17 kanagaraj joined #gluster
04:23 Manikandan joined #gluster
04:23 ramteid joined #gluster
04:26 pranithk JoeJulian: Thanks for this
04:31 zhangjn joined #gluster
04:34 kshlm joined #gluster
04:35 jiffin joined #gluster
04:40 DV joined #gluster
04:44 kotreshhr joined #gluster
04:44 hgowtham joined #gluster
04:45 ppai joined #gluster
04:48 dannyb joined #gluster
04:50 poornimag joined #gluster
04:51 vmallika joined #gluster
04:56 nbalacha joined #gluster
05:02 pppp joined #gluster
05:03 Apeksha joined #gluster
05:05 hgowtham joined #gluster
05:08 Bhaskarakiran joined #gluster
05:08 dusmant joined #gluster
05:15 nehar joined #gluster
05:29 zhangjn joined #gluster
05:29 overclk joined #gluster
05:32 overclk_ joined #gluster
05:34 kshlm joined #gluster
05:34 daMaestro joined #gluster
05:37 ndarshan joined #gluster
05:41 kotreshhr joined #gluster
05:42 atinm joined #gluster
05:43 dusmant joined #gluster
05:47 aravindavk joined #gluster
05:47 ashiq_ joined #gluster
05:49 rafi joined #gluster
05:50 atalur joined #gluster
05:51 skoduri joined #gluster
05:53 kdhananjay joined #gluster
06:03 atinm joined #gluster
06:04 spalai joined #gluster
06:09 ramky joined #gluster
06:10 kshlm joined #gluster
06:16 alghost joined #gluster
06:17 zeittunnel joined #gluster
06:19 amye joined #gluster
06:21 adamaN joined #gluster
06:22 DV__ joined #gluster
06:27 haomaiwa_ joined #gluster
06:28 nishanth joined #gluster
06:30 kshlm joined #gluster
06:34 kotreshhr joined #gluster
06:35 aravindavk joined #gluster
06:38 skoduri joined #gluster
06:38 7GHABR4QK joined #gluster
06:38 amye joined #gluster
06:40 atinm joined #gluster
07:00 overclk_ joined #gluster
07:01 haomaiwa_ joined #gluster
07:06 SOLDIERz joined #gluster
07:10 deepakcs joined #gluster
07:23 suliba joined #gluster
07:25 mhulsman joined #gluster
07:28 jtux joined #gluster
07:28 uebera|| joined #gluster
07:28 uebera|| joined #gluster
07:29 haomaiwa_ joined #gluster
07:36 suliba joined #gluster
07:51 DV__ joined #gluster
07:55 mbukatov joined #gluster
07:56 Akee joined #gluster
08:01 haomaiwa_ joined #gluster
08:04 suliba joined #gluster
08:11 tessier Is big Maildir storage on gluster still a bad idea or have there been improvements? Just how inadvisable is it? Because it's very tempting. :)
08:18 DV__ joined #gluster
08:19 mobaer joined #gluster
08:20 amye joined #gluster
08:24 [Enrico] joined #gluster
08:29 fsimonce joined #gluster
08:32 aravindavk joined #gluster
08:45 nbalacha joined #gluster
08:45 skoduri joined #gluster
08:46 atinm joined #gluster
08:47 ndevos tessier: some users seem to do that, but I would not expect high performance with it; that might be acceptable though
08:49 jtux joined #gluster
08:50 kotreshhr joined #gluster
08:52 anil joined #gluster
08:52 dusmant joined #gluster
08:54 aravindavk joined #gluster
08:55 arcolife joined #gluster
08:56 sc0 joined #gluster
09:01 haomaiwa_ joined #gluster
09:08 Apeksha joined #gluster
09:27 sc0 joined #gluster
09:29 Saravana_ joined #gluster
09:39 sc0 joined #gluster
09:41 MACscr joined #gluster
09:41 ctria joined #gluster
09:45 Saravanakmr joined #gluster
09:47 haomaiwa_ joined #gluster
09:48 aravindavk joined #gluster
09:57 kotreshhr1 joined #gluster
09:58 ahino joined #gluster
10:00 p8952 joined #gluster
10:01 haomaiwang joined #gluster
10:03 firemanxbr joined #gluster
10:06 haomai___ joined #gluster
10:21 kshlm joined #gluster
10:21 itisravi joined #gluster
10:21 zhangjn joined #gluster
10:25 lanning joined #gluster
10:26 kaarebs joined #gluster
10:26 klaxa joined #gluster
10:27 kaarebs Hi .. We are using a gluster setup running on 22 servers at DigitalOcean .. after we updated to more than 20 servers our gluster client sometimes fails and the output on our command line is the following
10:27 kaarebs on ls -l > d?????????  ? ?    ?        ?            ? distributed_storage/
10:28 haomaiwa_ joined #gluster
10:28 kaarebs everything works after an umount and a new mount
10:31 zhangjn joined #gluster
10:33 haomaiwang joined #gluster
10:48 kaarebs Is it possible to set some option that will automatically remount on error? - I can't find anything.
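
(For reference: there is no built-in client option that remounts automatically; the "d????????? ?" output above is what a stale or disconnected FUSE mount looks like. A rough watchdog that could be run from cron, where the mount point, server and volume names are placeholders:)

    #!/bin/sh
    MOUNTPOINT=/mnt/distributed_storage   # hypothetical names - adjust to the real setup
    SERVER=gluster01
    VOLUME=gv0
    # If a plain stat of the mount point fails, lazily unmount and remount it
    if ! stat "$MOUNTPOINT" >/dev/null 2>&1; then
        umount -l "$MOUNTPOINT"
        mount -t glusterfs "$SERVER:/$VOLUME" "$MOUNTPOINT"
    fi
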
10:52 kotreshhr joined #gluster
10:53 ndevos kaarebs: it is usually best to update clients before updating the servers, have you updated the clients already?
10:53 _shaps_ joined #gluster
10:54 ndevos kaarebs: also, what version were you running, and to what version are you updating?
10:58 ctria joined #gluster
10:58 aravindavk joined #gluster
10:58 nbalacha joined #gluster
11:01 mobaer1 joined #gluster
11:01 haomaiwa_ joined #gluster
11:04 kkeithley1 joined #gluster
11:06 harish_ joined #gluster
11:09 atinm joined #gluster
11:09 dusmant joined #gluster
11:11 EinstCrazy joined #gluster
11:11 zhangjn joined #gluster
11:11 badone joined #gluster
11:12 suliba joined #gluster
11:19 zeittunnel joined #gluster
11:42 firemanxbr joined #gluster
11:50 bash1235123 joined #gluster
11:51 bash1235123 # gluster volume status Another transaction is in progress. Please try again after sometime.
11:52 bash1235123 anybody can help with that ?
11:52 bash1235123 seems that only restarting helps
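
(For reference: "Another transaction is in progress" usually means a glusterd in the pool is still holding, or never released, the cluster-wide lock. A common way out, sketched with example paths, is to find the node holding the lock in the glusterd logs and restart only glusterd on that node; brick processes and client I/O are not touched:)

    # On each node, look for lock/unlock messages naming the holder
    grep -i lock /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 20
    # Restart the management daemon on the node holding the stale lock
    service glusterd restart    # or: systemctl restart glusterd
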
11:56 ramky joined #gluster
12:01 haomaiwa_ joined #gluster
12:05 julim joined #gluster
12:06 sc0 joined #gluster
12:09 zhangjn joined #gluster
12:09 kshlm joined #gluster
12:11 zhangjn joined #gluster
12:11 ashiq joined #gluster
12:16 kdhananjay joined #gluster
12:20 mbukatov joined #gluster
12:25 ira joined #gluster
12:30 sc0 joined #gluster
12:35 spalai left #gluster
12:38 kaarebs ndevos: What do you mean? - should the versions always be the same - client and server?
12:40 mbukatov joined #gluster
12:45 Bhaskarakiran joined #gluster
12:49 kaarebs ndevos: I am using glusterfs 3.6.7
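
(For reference: the quickest way to answer the version question above is to check both sides directly; package names vary by distro, so these are just examples:)

    # On a client: version of the FUSE client binary
    glusterfs --version
    # On a server: version of the management daemon and CLI
    glusterd --version
    gluster --version
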
12:57 neca joined #gluster
12:57 mlncn joined #gluster
12:59 neca Hello. My glfsheal-data.log log is showing "Using Program GlusterFS 3.3", though I only have 3.7.6 packages installed. How come?
13:00 zeittunnel joined #gluster
13:05 lpabon joined #gluster
13:21 poornimag joined #gluster
13:24 unclemarc joined #gluster
13:26 Humble joined #gluster
13:28 harish_ joined #gluster
13:29 d0nn1e joined #gluster
13:32 zhangjn joined #gluster
13:38 sc0 joined #gluster
13:41 hchiramm joined #gluster
13:47 sc0_ joined #gluster
13:48 mobaer joined #gluster
13:59 B21956 joined #gluster
14:01 RameshN joined #gluster
14:11 haomaiwa_ joined #gluster
14:15 ivan_rossi joined #gluster
14:17 shaunm joined #gluster
14:30 nbalacha joined #gluster
14:43 Manikandan joined #gluster
14:43 Humble joined #gluster
14:46 hchiramm joined #gluster
14:46 bash1235123 "Another transaction is in progress. Please try again after sometime. "
14:46 bash1235123 anybody can help ?
14:47 MessedUpHare joined #gluster
14:47 chirino joined #gluster
14:52 plarsen joined #gluster
14:52 skylar joined #gluster
14:54 hamiller joined #gluster
14:56 kotreshhr left #gluster
14:56 MessedUpHare joined #gluster
15:01 theron joined #gluster
15:01 haomaiwa_ joined #gluster
15:03 Sjors Hi
15:03 glusterbot Sjors: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:03 Sjors yeah yeah I'm going to :D
15:04 Sjors I have a 4-peer Gluster cluster with 2 bricks that I just found out didn't sync for a while
15:04 Sjors after rebooting one of the peers with a brick on it
15:05 Sjors (it's Replicate, 1 x 2 = 2)
15:05 Sjors it's currently in self-heal with 19082 entries healed and 74106 heal failed entries
15:05 Sjors I'm getting I/O errors on various files
15:06 Sjors now, "heal sync info heal-failed" is in the command usage but gives "command not supported"
15:06 Sjors "statistics heal-count" gives 210k entries yet to heal
15:07 Sjors I want to inspect the splitbrains, but "info split-brain" hangs forever
15:08 hagarth joined #gluster
15:08 Sjors anyone knows what's going on with "info split-brain" hanging?
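
(For reference: the heal commands being discussed take the form below in the 3.x CLI, with the volume name as a placeholder; "info split-brain" walks the heal indices on every brick, so it can take a very long time and look hung while a large heal is still in progress:)

    # Count of entries still pending heal
    gluster volume heal VOLNAME statistics heal-count
    # Entries pending heal, and entries detected as split-brain
    gluster volume heal VOLNAME info
    gluster volume heal VOLNAME info split-brain
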
15:12 mlncn joined #gluster
15:12 mbukatov joined #gluster
15:14 prg3 joined #gluster
15:23 skoduri joined #gluster
15:29 nage joined #gluster
15:32 the-me joined #gluster
15:48 arcolife joined #gluster
15:54 dusmant joined #gluster
16:04 shubhendu joined #gluster
16:05 klaxa joined #gluster
16:07 ju5t joined #gluster
16:08 ju5t hi, we've just added two bricks to an existing volume, but we're not seeing an increase in available disk space
16:08 ju5t we have two 1TB bricks and two 500GB bricks in this volume
16:09 ju5t do we have to start the rebalance process so the clients will be made aware of the expansion?
16:13 JoeJulian ju5t: No. Make sure your clients are actually connecting to your new bricks (netstat and/or check the client logs).
16:13 JoeJulian ju5t: You rebalance (at least fix-layout) so you can actually use the new bricks though.
16:16 ju5t JoeJulian: it was a rookie mistake on our end, the firewall on the clients was blocking the new bricks
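
(For reference: the checks and the fix-layout step mentioned above look roughly like this, with the volume name as a placeholder; clients need to reach glusterd on TCP 24007 and the brick ports, 49152 and up on current releases, on every server:)

    # From a client: confirm established connections to the new bricks
    # (adjust the port pattern to the brick ports shown by "gluster volume status")
    netstat -tn | grep -E ':24007|:4915[0-9]'
    # Spread the directory layout over the new bricks so new files can be placed on them
    gluster volume rebalance VOLNAME fix-layout start
    gluster volume rebalance VOLNAME status
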
16:21 prg3 left #gluster
16:30 ju5t joined #gluster
16:41 itisravi joined #gluster
16:47 bennyturns joined #gluster
16:48 jiffin joined #gluster
16:53 fsimonce joined #gluster
16:55 kotreshhr joined #gluster
17:03 kotreshhr1 joined #gluster
17:13 firemanxbr joined #gluster
17:19 calavera joined #gluster
17:21 calavera joined #gluster
17:28 jwd joined #gluster
17:44 matclayton joined #gluster
17:44 matclayton Any idea when 3.7.7 is likely to be released?
17:47 jiffin matclayton: within 2-3 weeks' time
17:49 kotreshhr joined #gluster
17:49 matclayton ah ok I thought it was this week, waiting on a bug fix in it, before turning on a new cluster
17:54 ivan_rossi left #gluster
18:06 nishanth joined #gluster
18:16 ivan_rossi joined #gluster
18:25 Rapture joined #gluster
18:30 dgandhi joined #gluster
18:31 dgandhi joined #gluster
18:32 dgandhi joined #gluster
18:36 kotreshhr left #gluster
18:41 mlncn joined #gluster
18:51 ctria joined #gluster
18:53 kanagaraj joined #gluster
19:02 calavera joined #gluster
19:04 ahino joined #gluster
19:07 calavera joined #gluster
19:09 matclayton joined #gluster
19:13 cliluw joined #gluster
19:14 squizzi_ joined #gluster
19:35 kaarebs joined #gluster
19:51 theron joined #gluster
19:55 lord4163 joined #gluster
20:03 unclemarc joined #gluster
20:09 matclayton joined #gluster
20:12 jkroon joined #gluster
20:15 jiffin joined #gluster
20:18 Mattlantis joined #gluster
20:19 mattb joined #gluster
20:23 unclemarc joined #gluster
20:24 jkroon https://www.gluster.org/pipermail/gluster-users/2015-August/023212.html
20:24 glusterbot Title: [Gluster-users] After Centos 6 yum update client fails to mount glusterfs volume (at www.gluster.org)
20:24 jkroon should that info perhaps form part of the upgrade guide?
20:32 calavera joined #gluster
20:32 jkroon i've also not yet managed to get a 3.7 installation working without those options ...
20:48 deniszh joined #gluster
20:56 mrEriksson joined #gluster
21:15 PatNarciso Happy Thursday all.
21:16 JoeJulian Sure Happy It's Thursday.
21:22 PatNarciso Realizing I formatted an underlying 44TB (36TB used) brick with the wrong -i size; what is the recommended option for transferring the data to a new brick -- which I have formatted with the recommended -i size.
21:22 PatNarciso Ideally, I'd like to keep everything on the same gluster volume. (we have scripts that leverage the gluster GUIDs, and I'd *prefer* to keep that intact.)  Would rsync with xattrs do the trick?  Or is this just a bad idea, and I should start with a new gluster volume.  Or, geosync to the new gluster volume?
21:22 JoeJulian is it replicated?
21:22 matclayton joined #gluster
21:23 PatNarciso No.
21:23 JoeJulian replace-brick...start and wait.
21:23 JoeJulian rsync is not recommended
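
(For reference: the replace-brick syntax JoeJulian is referring to looks like the sketch below, with volume and brick paths as placeholders. Some 3.x releases have dropped the data-migrating start/status/commit variant and only accept "commit force", which swaps the brick without moving data itself - that matters on a plain distribute volume, so it is worth checking what the installed release supports:)

    # Older releases: migrate data from the old brick to the new one
    gluster volume replace-brick VOLNAME host:/old/brick host:/new/brick start
    gluster volume replace-brick VOLNAME host:/old/brick host:/new/brick status
    # Newer releases: swap the brick in place (self-heal repopulates it only on replicated volumes)
    gluster volume replace-brick VOLNAME host:/old/brick host:/new/brick commit force
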
21:23 JoeJulian but.... performance tests showed no noticeable difference between the default inode size and larger ones.
21:39 PatNarciso it was my understanding that dirs/volumes with lots of small files may benefit from a larger size.
21:42 dgbaley joined #gluster
21:42 PatNarciso hmm; is there an app or xfs-utility that prints out file count, dir count, avg file size?   I want to make sure I'm communicating my use-case well.
21:43 JoeJulian Not that I know of. I'm just relaying what I heard from someone at Red Hat who did the performance testing.
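
(For reference: there is no dedicated XFS utility for this, but a rough count and average can be pulled straight off the brick with find and awk; the brick path below is only an example, and the .glusterfs metadata tree is skipped:)

    # Files and average file size under the brick
    find /bricks/brick1 -path '*/.glusterfs' -prune -o -type f -printf '%s\n' \
      | awk '{ n++; s += $1 } END { printf "files=%d avg_bytes=%.0f\n", n, (n ? s/n : 0) }'
    # Directory count under the brick
    find /bricks/brick1 -path '*/.glusterfs' -prune -o -type d -print | wc -l
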
21:44 PatNarciso I'm reaching deep into my memory, but I thought I recalled a redhat pdf/slideshare where xfs performance was tested and improved with the '-i 512' flag.
21:45 matclayton joined #gluster
21:46 * PatNarciso does a little digging -- I'll find it.
21:47 PatNarciso JoeJulian: are you aware of any companies leveraging gluster in a video post-production environment?  (ie: where adobe premiere leverages a gluster volume)?
21:59 PatNarciso following up on the xfs -i option: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/Brick_Configuration.html
21:59 glusterbot Title: 11.2. Brick Configuration (at access.redhat.com)
21:59 PatNarciso section 4. Logical Block Size for the Directory
22:00 PatNarciso An XFS file system allows you to select a logical block size for the file system directory that is greater than the logical block size of the file system. Increasing the logical block size for the directories from the default 4 K decreases the directory I/O, which in turn improves the performance of directory operations. To set the block size, you need to use the -n size option with the mkfs.xfs
22:02 PatNarciso doh.  -i, -n... argh; one letter changes everything.
22:05 PatNarciso anyways... my new brick has the updated -n value, whereas the current brick has the default -n value.  and I'm expecting there to be a performance improvement after moving everything to this new brick.
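
(For reference: the two mkfs.xfs knobs being compared here are set at format time; -i size sets the inode size, where 512 bytes leaves room for GlusterFS xattrs inside the inode, and -n size sets the directory block size. The device name is a placeholder:)

    # Format a new brick with 512-byte inodes and 8 KiB directory blocks
    mkfs.xfs -i size=512 -n size=8192 /dev/sdb1
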
22:08 skylar joined #gluster
22:14 JoeJulian Red Hat did have a presentation where they said that, because of the size of the metadata, an inode size of 512 would give better performance. Then their performance team tested after that and said it didn't matter, that there was no measurable difference.
22:14 PatNarciso *head on desk*
22:14 JoeJulian But I think their documentation sticks with the "better safe than sorry" metric.
22:16 PatNarciso well, the good news is that there appears to be no reason to perform the brick transfer/replace now.
22:21 bowhunter joined #gluster
22:36 JoeJulian I thought you'd like not having to do that. :D
22:41 semajnz joined #gluster
22:42 ctria joined #gluster
22:42 jrm16020 joined #gluster
22:43 skylar joined #gluster
22:46 gildub joined #gluster
23:02 ahino joined #gluster
23:03 zhangjn joined #gluster
23:06 ctria joined #gluster
23:08 suliba joined #gluster
23:23 plarsen joined #gluster
23:23 matclayton left #gluster
23:40 ctria joined #gluster
23:53 ira joined #gluster
