
IRC log for #gluster, 2016-03-29


All times shown according to UTC.

Time Nick Message
00:01 haomaiwang joined #gluster
00:04 plarsen joined #gluster
00:06 Wizek_ joined #gluster
00:08 jvandewege_ joined #gluster
00:13 hagarth joined #gluster
00:27 gbox phibs: yes centos7 here too.  It shouldn't be a problem, just keep an eye on it.  bye
00:49 jvandewege_ joined #gluster
01:01 haomaiwa_ joined #gluster
01:06 haomaiw__ joined #gluster
01:07 EinstCrazy joined #gluster
01:11 camg joined #gluster
01:17 vmallika joined #gluster
02:01 phibs thanks gb
02:01 haomaiwa_ joined #gluster
02:05 luizcpg joined #gluster
02:08 7YUAANPYY joined #gluster
02:29 pdrakeweb joined #gluster
02:29 baojg joined #gluster
02:30 pdrakeweb joined #gluster
02:42 Neilo joined #gluster
02:45 Neilo hey guys, we have a 16 brick, 8 node system, that we ran rebalance on over the weekend. We now have lots of duplicate files with -rw-rw---T+ in the ls -l file listing... find /bricks/brick1/ -type f -size 0 -perm 1000 -exec rm -v {} \; is the current plan
02:45 glusterbot Neilo: -rw-rw-'s karma is now -2
02:46 baojg joined #gluster
02:46 Neilo any experience with this? Is it a layout issue? We are running 3.6.3 on centos 6.6
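A quick sanity check before deleting anything (a sketch only; the brick path mirrors the example above, the file path is hypothetical): genuine DHT link files are the size-0, mode-1000 entries that also carry a trusted.glusterfs.dht.linkto xattr naming the subvolume holding the real data.

    getfattr -m . -d -e hex /bricks/brick1/path/to/suspect-file
    getfattr -n trusted.glusterfs.dht.linkto -e text /bricks/brick1/path/to/suspect-file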
02:52 haomaiwa_ joined #gluster
02:54 klfwip joined #gluster
02:54 toppy joined #gluster
02:58 EinstCra_ joined #gluster
03:01 haomaiwa_ joined #gluster
03:06 ovaistariq joined #gluster
03:09 harish joined #gluster
03:13 DV joined #gluster
03:22 camg joined #gluster
03:23 RameshN joined #gluster
03:25 vmallika joined #gluster
03:29 overclk joined #gluster
03:30 vmallika joined #gluster
03:37 ramteid joined #gluster
03:38 atinm joined #gluster
03:47 nishanth joined #gluster
03:51 shubhendu joined #gluster
03:55 atinm joined #gluster
04:01 64MAAPCQ3 joined #gluster
04:03 itisravi joined #gluster
04:06 nbalacha joined #gluster
04:09 nehar joined #gluster
04:11 Manikandan joined #gluster
04:20 DV joined #gluster
04:26 suliba joined #gluster
04:38 camg joined #gluster
04:40 camg joined #gluster
04:44 jhyland joined #gluster
04:44 scoban joined #gluster
04:53 jiffin joined #gluster
04:56 ashiq joined #gluster
04:59 DV joined #gluster
05:01 haomaiwa_ joined #gluster
05:02 sakshi joined #gluster
05:05 prasanth joined #gluster
05:08 ndarshan joined #gluster
05:17 gowtham joined #gluster
05:17 Bhaskarakiran joined #gluster
05:20 rafi joined #gluster
05:21 aravindavk joined #gluster
05:23 karthik___ joined #gluster
05:25 baojg joined #gluster
05:26 jhyland joined #gluster
05:27 JoeJulian ~brick order | phibs
05:27 glusterbot phibs: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
05:28 JoeJulian @lucky what is this .glusterfs directory
05:28 glusterbot JoeJulian: https://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
05:28 JoeJulian Neilo: Start with reading that article. ^
05:28 Apeksha joined #gluster
05:30 vmallika joined #gluster
05:32 phibs JoeJulian: danke
05:39 scobanx joined #gluster
05:39 scobanx Anyone online?
05:41 hackman joined #gluster
05:43 jiffin scobanx: i'm here :)
05:45 phibs left #gluster
05:45 Lee1092 joined #gluster
05:47 Neilo Thanks JoeJulian, yes the bricks are spread with a replica of 2 :) I was reading this earlier -> https://www.gluster.org/pipermail/gluster-users/2014-December/019983.html
05:47 glusterbot Title: [Gluster-users] Hundreds of duplicate files (at www.gluster.org)
05:49 Neilo we have just ran: gluster volume rebalance glusvol fix-layout start
05:49 scobanx Jiifin: hi :)
05:49 ashiq_ joined #gluster
05:50 scobanx Jiffin: Can you tell me the details of disperse heal? Any documentation you provide would be useful.
05:50 skoduri joined #gluster
05:54 jiffin scobanx: I am not the right person for that question, but u can check xavih, pranithk, aspandey for the same
05:55 [Enrico] joined #gluster
06:00 hgowtham joined #gluster
06:01 scobanx jiffin: ok, will they appear here today?
06:01 haomaiwa_ joined #gluster
06:07 jiffin scobanx: i am not sure
06:08 Saravanakmr joined #gluster
06:08 jiffin scobanx: you can use the gluster ML if you want to increase the probability of catching them
06:09 scobanx jiffin: ok, can you at least tell them to send an email to gluster-users about the details of disperse heal if you see them
06:09 scobanx jiffin: I already sent mails there, no answer till now..
06:10 ashiq_ joined #gluster
06:11 jiffin scobanx: oh
06:14 nishanth joined #gluster
06:14 spalai joined #gluster
06:16 mhulsman joined #gluster
06:19 Neilo would this log entry, at the end of glusvol-rebalance.log, indicate a failed rebalance? What would be the steps to correct it?
06:19 Neilo [2016-03-29 05:24:04.735730] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (15), shutting down
06:19 glusterbot Neilo: ('s karma is now -127
06:20 Neilo the fix-layout completed already, FYI
06:20 kanagaraj joined #gluster
06:21 [Enrico] joined #gluster
06:26 unforgiven512 joined #gluster
06:27 kdhananjay joined #gluster
06:32 spalai joined #gluster
06:34 karnan joined #gluster
06:46 poornimag joined #gluster
06:49 rwheeler joined #gluster
06:50 Neilo ah, that was the fix-layout rebalance... the earlier rebalance had the same line, with this above it: [2016-03-28 19:04:15.241671] I [MSGID: 109028] [dht-rebalance.c:2115:gf_defrag_status_get] 0-glusterfs: Files migrated: 162542, size: 57172640468302, lookups: 1010190, failures: 0, skipped: 0
06:52 JoeJulian signum 15 suggests that the process completed and was terminated.
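A short aside on checking rebalance state (assuming the volume name glusvol used above and default log locations):

    gluster volume rebalance glusvol status
    less /var/log/glusterfs/glusvol-rebalance.log    # the per-node rebalance log quoted above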
06:53 pur joined #gluster
06:58 ramky joined #gluster
07:01 haomaiwa_ joined #gluster
07:01 jiffin scobanx: i can see pranithk online on gluster-dev, post ur query there
07:05 harish_ joined #gluster
07:06 aspandey joined #gluster
07:06 deniszh joined #gluster
07:13 anil_ joined #gluster
07:15 ctria joined #gluster
07:20 kshlm joined #gluster
07:21 pgreg joined #gluster
07:22 scobanx join #gluster-dev
07:23 jri joined #gluster
07:25 arcolife joined #gluster
07:27 pgreg joined #gluster
07:28 harish_ joined #gluster
07:30 arcolife joined #gluster
07:34 fsimonce joined #gluster
07:47 robb_nl joined #gluster
07:49 rouven joined #gluster
07:49 [diablo] joined #gluster
07:53 haomaiwang joined #gluster
07:56 mbukatov joined #gluster
07:56 jww Hello. I'm installing a four-node gluster 3.7 setup, but I get poor performance. When testing with dd, for example, I get only 42MB/s. On the old 2-node gluster (version 3.2) I get 72MB/s. Here is my gluster volume info and dd log: http://pastebin.ca/3415785
07:57 glusterbot Title: pastebin - Something - post number 3415785 (at pastebin.ca)
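For reference, a minimal write test of the kind jww describes might look like this (a sketch only; the mount point is an assumption, and conv=fdatasync makes dd wait for the data to reach the bricks rather than the page cache):

    dd if=/dev/zero of=/mnt/gv0/ddtest bs=1M count=1024 conv=fdatasync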
08:01 haomaiwa_ joined #gluster
08:04 post-factum jww: what gluster version do you use?
08:04 jww I use 3.7 on the new server , and 3.2 on the old.
08:04 post-factum jww: please be more specific
08:05 Norky joined #gluster
08:06 jww on the new server it's gluster 3.7.8 , and on the old it's 3.3.2
08:06 post-factum please upgrade to 3.7.9 first
08:06 jww okay I do it now.
08:11 jvava joined #gluster
08:15 rastar joined #gluster
08:17 Slashman joined #gluster
08:21 farblue joined #gluster
08:24 jiffin joined #gluster
08:26 farblue joined #gluster
08:27 harish_ joined #gluster
08:27 tom[] joined #gluster
08:27 arcolife joined #gluster
08:28 lanning joined #gluster
08:28 jww post-factum: I upgraded to 3.7.9, but performance is slightly worse :^
08:28 jbrooks joined #gluster
08:31 farblue hi all :) On thursday evening my gluster cluster suddenly seemed to lock up and services trying to access files via the fuse mounts fell over when the servers ran out of file handles. All the fuse clients still seemed to be working but none of the server daemons were responding. I couldn’t see anything in the logs but knew one of the servers (which runs a mixed workload and is not dedicated to gluster) was doing some heavy processing (although not
08:31 farblue related to anything gluster) and so I restarted the gluster server on that machine and all the servers seemed to ‘unlock’. Anyone got any idea what could have caused such an issue?
08:32 farblue joined #gluster
08:32 Wizek joined #gluster
08:33 Wizek_ joined #gluster
08:34 tom[] joined #gluster
08:34 jbrooks joined #gluster
08:36 post-factum jww: have you updated both server and client, and restarted/remounted everything?
08:37 jww it's debian, so services have been restarted, but I'll redo the mount manually now.
08:41 jww after remounting I get better write performance, ~58MB
08:41 jww ~58MB/sec
08:42 post-factum jww: you could try to play with following options: http://termbin.com/baoz
08:43 post-factum jww: i would pay special attention to write-behind and flush-behind in your case
08:43 post-factum (those options are from one of my volumes)
08:44 jww thanks ! I try those.
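Applying such options is done per volume with gluster volume set; the option names below exist, but the values are only illustrative and <volname> is a placeholder:

    gluster volume set <volname> performance.write-behind on
    gluster volume set <volname> performance.flush-behind on
    gluster volume set <volname> performance.write-behind-window-size 1MB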
08:51 kotreshhr joined #gluster
08:56 scobanx post-factum: what is the difference between client.event-threads and performance.client-io-threads
08:56 Norky joined #gluster
08:57 post-factum scobanx: afaik, event-threads is related to socket i/o multiplexing with epoll(), while client-io-threads is related to threads that actually process i/o
08:58 scobanx post-factum, which thread do the disperse calculations on client?
08:59 post-factum scobanx: that should be disperse translator, but i have no idea whether it is multithreaded
09:00 scobanx ok thanks.
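Both knobs mentioned above are ordinary volume options; a sketch with placeholder values:

    gluster volume set <volname> client.event-threads 4
    gluster volume set <volname> server.event-threads 4
    gluster volume set <volname> performance.client-io-threads on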
09:01 haomaiwa_ joined #gluster
09:01 auzty joined #gluster
09:02 Wizek_ joined #gluster
09:06 toppy joined #gluster
09:08 robb_nl joined #gluster
09:09 kshlm joined #gluster
09:12 sagarhani joined #gluster
09:17 scobanx When I restart nodes, the brick processes are not started by glusterd every time. I have to do a force volume start. Anything I can do about it?
09:18 Bhaskarakiran joined #gluster
09:18 post-factum scobanx: check your logs first
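Typical places to look, plus the workaround scobanx mentions (default CentOS/RHEL log paths assumed; the glusterd log file name varies by release):

    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # glusterd's own log (glusterd.log on newer builds)
    less /var/log/glusterfs/bricks/*.log                     # one log per brick
    gluster volume start <volname> force                     # restarts any bricks glusterd failed to bring up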
09:19 gowtham joined #gluster
09:21 jiffin joined #gluster
09:22 nishanth joined #gluster
09:22 ggarg joined #gluster
09:23 jww can somebody tell me what read/write speed I should expect with a 4-node gluster storage (replicated and distributed on 4 bricks) on a 1Gb network?
09:27 pur joined #gluster
09:28 Romeor jww i assume write is around 50 MB/s and read 120MB/sec
09:31 jww umm it's not very fast.
09:31 Romeor it's up to the theoretical maximum at 1gbps
09:31 Romeor write is slower due to writing to multiple hosts
09:32 ashiq_ joined #gluster
09:32 Romeor so write is about at half of 1gbps and read at 1gbps
09:33 Romeor the bottleneck is switch
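The arithmetic behind that estimate: 1 Gbit/s is roughly 125 MB/s on the wire, and with replica 2 the FUSE client sends every write to both replicas through the same NIC, so useful write throughput tops out near 125/2 ≈ 60 MB/s, while reads are served from a single replica and can approach the full ~120 MB/s.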
09:36 tom[] joined #gluster
09:37 cholcombe joined #gluster
09:38 hackman joined #gluster
09:40 jww I see, I have 58MB in write with gluster fuse client, and 120MB with NFS.
09:40 bluenemo joined #gluster
09:42 baojg joined #gluster
09:45 baojg joined #gluster
09:53 R0ok_ joined #gluster
09:59 karnan joined #gluster
10:01 haomaiwa_ joined #gluster
10:05 farblue anyone have any idea what can cause a gluster setup to just lock up?
10:08 farblue jww: how are you measuring the speeds, out of curiosity?
10:10 harish_ joined #gluster
10:12 jww farblue: I use a mix of dd/fio/smf iozone, I had the info from http://www.gluster.org/community/documentation/index.php/Performance_Testing
10:13 jww farblue: about the lock up, maybe the log show something.
10:14 farblue jww: I couldn’t see anything out of the ordinary :( Tbh, I didn’t spend much time on it because when the server cluster locked up, all the fuse clients started blocking io requests and then all the services making those requests fell over with errors stating there were no more file handles available
10:15 farblue restarting one of the server instances seemed to sort it. Maybe it was luck i picked the right one.
10:15 MrAbaddon joined #gluster
10:16 farblue but it surprised me that 1 issue on 1 of the servers could impact all the other servers and then cascade to all the clients like it did
10:18 Romeor to me it seems like networking issue
10:27 farblue Romeor: possibly, although Consul didn’t have any issues so there was no problem routing between the servers
10:42 Romeor JoeJulian, ?
10:46 shubhendu joined #gluster
10:46 luizcpg joined #gluster
10:47 nishanth joined #gluster
10:52 RameshN joined #gluster
11:00 post-factum is it possible to join two clusters? lets say i have 3 nodes in one cluster and 2 nodes in a second cluster, each with its own set of volumes with unique names. could i probe one cluster from the other and get the clusters merged?
11:01 overclk joined #gluster
11:01 haomaiwa_ joined #gluster
11:02 jiffin post-factum: No
11:07 [diablo] joined #gluster
11:16 Debloper joined #gluster
11:23 johnmilton joined #gluster
11:28 R0ok_ joined #gluster
11:32 gem joined #gluster
11:33 arcolife joined #gluster
11:35 fattaneh1 joined #gluster
11:38 mbukatov joined #gluster
11:41 vmallika joined #gluster
11:45 luizcpg joined #gluster
11:50 bfoster joined #gluster
11:52 Manikandan REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 10 minutes) in #gluster-meeting
11:53 RameshN joined #gluster
11:53 nishanth joined #gluster
11:57 shaunm joined #gluster
11:59 ira joined #gluster
12:01 haomaiwang joined #gluster
12:06 kanagaraj joined #gluster
12:08 kanagaraj joined #gluster
12:09 shubhendu joined #gluster
12:12 14WAALSLY joined #gluster
12:13 14WAALSLY left #gluster
12:25 vmallika joined #gluster
12:26 unclemarc joined #gluster
12:27 EinstCrazy joined #gluster
12:30 Saravanakmr joined #gluster
12:36 archit_ joined #gluster
12:36 baojg joined #gluster
12:44 chirino_m joined #gluster
12:46 Neilo joined #gluster
12:46 vmallika joined #gluster
12:47 nishanth joined #gluster
12:49 atinm joined #gluster
12:51 nehar joined #gluster
12:52 bwerthmann joined #gluster
12:54 julim joined #gluster
13:00 Neilo JoeJulian: Thanks for the article. I understand the .glusterfs directory a little more now. I'm missing something here though, the trusted.glusterfs.dht.linkto= variable is on some other files. http://www.fpaste.org/346945/45925631/
13:00 glusterbot Title: #346945 Fedora Project Pastebin (at www.fpaste.org)
13:01 haomaiwa_ joined #gluster
13:03 Neilo but not on this one, for example. Is there a way to update or recreate the metadata?
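One commonly suggested way to let DHT rebuild missing link files and layout metadata is simply to force lookups through the FUSE mount (a sketch; /mnt/glusvol is a hypothetical client mount point, and a full walk can be slow on large volumes):

    stat /mnt/glusvol/path/to/file
    find /mnt/glusvol -noleaf -print0 | xargs -0 stat >/dev/null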
13:05 spalai left #gluster
13:06 EinstCrazy joined #gluster
13:07 natarej joined #gluster
13:17 fattaneh joined #gluster
13:17 fattaneh left #gluster
13:19 rafi joined #gluster
13:20 ramky joined #gluster
13:31 arcolife joined #gluster
13:36 chromatin Does anyone have any ideas how to troubleshoot a single-node (test before adding other nodes) GlusterFS setup where the underlying disk can be read at 1-2 GByte/sec but the GlusterFS FUSE mount tops out at 200-300 MB/sec?
13:39 Manikandan joined #gluster
13:44 nbalacha joined #gluster
13:48 scoban joined #gluster
14:00 plarsen joined #gluster
14:00 ramky joined #gluster
14:02 marbu joined #gluster
14:03 toppy joined #gluster
14:07 spalai joined #gluster
14:18 fattaneh1 joined #gluster
14:25 ndevos chromatin: not sure, but it could be related to the way you run the test
14:26 ndevos chromatin: for example, make sure to drop the caches before running each test
14:28 ndevos also do not compare reading from the block-device, with reading from the fuse-mounted filesystem, run the exact same test and compare that
14:28 chromatin ndevos: Cache is not coming in to play for sure
14:28 ndevos fuse will go through the (local) network, sometimes that is a bottleneck too
14:28 chromatin ndevos: Not clear what you mean by the second comment. The test is a straight dd (and yes someone else said “don’t use dd” but it matches precisely my workflow which is to read sequentially a gigantic file start to finish)
14:29 ndevos chromatin: running dd is fine, as long as that matches your workload :)
14:29 hamiller joined #gluster
14:30 chromatin Anyway, I am at a loss as to why the massive differential in performance. CPU is not even pegging during the read form glusterfs , so it’s not that as far as I can tell
14:30 ndevos chromatin: but run it on a file, do not compare "dd if=/dev/disk" with "dd if=/mnt/file.img" - yes, people tend to do that
14:31 ndevos chromatin: fuse causes a lot of context switches, for each read that you do, try with a higher value for bs=.. with dd
14:31 chromatin ndevos: Yes, I have read in large files. By increasing bs to 1M I was able to get read speed up to 300 MB/sec (otherwise it was stable around 200MB/sec), but no further gains in performance
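The two measures ndevos mentions, written out (run as root; the file path is an assumption):

    sync; echo 3 > /proc/sys/vm/drop_caches         # make sure reads really hit the bricks
    dd if=/mnt/glustervol/bigfile of=/dev/null bs=4M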
14:32 ndevos chromatin: you also may want to try glusterfs-coreutils, it does not use fuse but libgfapi and can perform much better
14:33 Manikandan joined #gluster
14:33 ndevos packaged in the CentOS Storage SIG repository, or get it from https://github.com/gluster/glusterfs-coreutils
14:33 glusterbot Title: GitHub - gluster/glusterfs-coreutils: Tools that work directly on Gluster volume, inspired by the standard coreutils. (at github.com)
14:33 chromatin ndevos: I will give it a try. Certainly for our workload, if we can’t get performance up (I noticed it when using my standard tools and only confirmed it using dd) this will be a non-starter. Thanks for your advice.
14:34 kpease joined #gluster
14:35 ndevos chromatin: you might get higher throughput if you run multiple dd commands at the same time, to simulate the number of clients that would access the volume
14:36 ndevos chromatin: if you run multiple clients, mounting the volume several times and distributing dd processes over those mountpoints should improve things too
14:37 chromatin ndevos: Will give it a try
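A sketch of the parallel-reader idea, assuming the same volume mounted twice at hypothetical mount points:

    for m in /mnt/gv0-a /mnt/gv0-b; do
        dd if=$m/bigfile of=/dev/null bs=4M &
    done
    wait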
14:40 kpease joined #gluster
14:43 haomaiwa_ joined #gluster
14:43 kotreshhr left #gluster
14:44 ggarg joined #gluster
14:45 fattaneh joined #gluster
14:49 MrAbaddon joined #gluster
14:52 [Enrico] joined #gluster
15:01 haomaiwa_ joined #gluster
15:03 nathwill joined #gluster
15:11 Saravanakmr joined #gluster
15:13 skoduri joined #gluster
15:16 deniszh joined #gluster
15:24 wushudoin joined #gluster
15:27 fattaneh left #gluster
15:28 DV joined #gluster
15:32 vmallika joined #gluster
15:35 deniszh1 joined #gluster
15:41 g3kk0 joined #gluster
15:41 g3kk0 anyone know how to change the settings for the built-in NFS server in Gluster?
15:42 g3kk0 I'm having some strange issues with Solr and file locking using NFS
15:47 kshlm joined #gluster
15:55 camg joined #gluster
15:56 camg joined #gluster
16:01 haomaiwa_ joined #gluster
16:07 jwd joined #gluster
16:08 d0nn1e joined #gluster
16:12 Wojtek joined #gluster
16:14 g3kk0 anyone know if the built in Gluster NFS server can be v4 instead of v3?
16:14 klfwip joined #gluster
16:17 robb_nl joined #gluster
16:22 hagarth g3kk0: no, built in NFS server is v3 only. Ganesha can do both v3 and v4 with gluster
16:22 g3kk0 ok thanks
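For completeness: the built-in (gNFS) server is tuned through nfs.* volume options rather than /etc/exports, e.g. (placeholders, illustrative values):

    gluster volume set <volname> nfs.disable off
    gluster volume set <volname> nfs.rpc-auth-allow 192.168.1.*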
16:24 rouven joined #gluster
16:24 ovaistariq joined #gluster
16:34 haomaiwang joined #gluster
16:36 scoban joined #gluster
16:38 haomaiwa_ joined #gluster
16:38 RameshN joined #gluster
16:42 scoban sometimes when I enter the gluster v status command it hangs and times out. But I think it continues in the background. How do I check if some command is running in the background and kill it completely?
16:43 misc using ps
16:43 scoban ps shows gluster v status?
16:43 scoban I think the glusterd process handles the commands
16:44 misc well, you might see the thread
16:46 shubhendu joined #gluster
16:48 scoban restarting glusterd process affects volumes and mounts?
16:59 RameshN joined #gluster
17:02 Gugge_ joined #gluster
17:03 carnil_ joined #gluster
17:05 post-factum scoban: no
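The reason this is safe: bricks (glusterfsd) and clients (glusterfs) run as separate processes from the management daemon, so restarting glusterd alone leaves I/O untouched. A quick way to see them, assuming systemd as on CentOS 7:

    pgrep -af gluster           # glusterd, glusterfsd (bricks), glusterfs (clients/shd/nfs)
    systemctl restart glusterd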
17:06 jocke- joined #gluster
17:06 JesperA- joined #gluster
17:06 JonathanS joined #gluster
17:06 martinet1 joined #gluster
17:06 wnlx_ joined #gluster
17:07 worzieznc joined #gluster
17:07 saltsa_ joined #gluster
17:07 mlhess- joined #gluster
17:07 squaly joined #gluster
17:07 cyberbootje joined #gluster
17:07 shyam joined #gluster
17:07 sankarshan_away joined #gluster
17:07 syadnom joined #gluster
17:07 cholcombe joined #gluster
17:07 crashmag_ joined #gluster
17:07 ggarg joined #gluster
17:07 al joined #gluster
17:08 hagarth joined #gluster
17:08 tswartz joined #gluster
17:08 troj joined #gluster
17:08 baoboa joined #gluster
17:08 armyriad joined #gluster
17:08 d-fence joined #gluster
17:08 DV joined #gluster
17:08 jotun joined #gluster
17:09 yawkat joined #gluster
17:16 sagarhani joined #gluster
17:20 Pintomatic joined #gluster
17:21 fattaneh1 joined #gluster
17:23 fyxim_ joined #gluster
17:23 wnlx joined #gluster
17:23 vmallika joined #gluster
17:43 nishanth joined #gluster
18:02 MrAbaddon joined #gluster
18:09 RameshN joined #gluster
18:11 jiffin joined #gluster
18:12 ovaistariq joined #gluster
18:39 kanagaraj joined #gluster
18:40 Hesulan joined #gluster
18:46 jiffin joined #gluster
18:54 jiffin joined #gluster
19:13 deniszh joined #gluster
19:26 tertiary joined #gluster
19:29 tertiary i have a two node gluster share, suddenly (maybe because of a reboot?) the shares are no longer synced. one has files that the other does not. i tried healing but it's stating "Self-heal-daemon is disabled. Heal will not be triggered on volume x"
19:42 nathwill joined #gluster
19:42 mowntan joined #gluster
20:00 ovaistariq joined #gluster
20:01 post-factum tertiary: let us know more info: peers status, volume info etc
20:06 tertiary volume info and peer status all check out. everything is connected and happy from there
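The error message itself points at the fix: re-enable the self-heal daemon and kick off a heal (a sketch using the volume name x from the message):

    gluster volume set x cluster.self-heal-daemon on
    gluster volume heal x full
    gluster volume heal x info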
20:25 deniszh joined #gluster
20:26 DV joined #gluster
20:31 luizcpg joined #gluster
20:36 tswartz joined #gluster
20:44 gbox joined #gluster
20:46 gbox Hi I keep encountering something troubling.  I have a gluster volume (2x2 dist/repl) mounted on /mnt/gluster/gv0 with a subdirectory bind mounted elsewhere (mount --bind /mnt/gluster/gv0/data /data).
20:47 jwaibel joined #gluster
20:48 gbox The brick logs on other peers keep complaining about the bind mount path being inaccessible: http://ur1.ca/opglx
20:48 glusterbot Title: #347191 Fedora Project Pastebin (at ur1.ca)
20:48 gbox Does anyone know how gluster might be interpreting the bind mount path?
20:50 mhulsman joined #gluster
20:51 hackman joined #gluster
20:53 gbox Files that trigger the errors have a permissions listing of ---------T in the volume itself (fuse mount).  Has anyone seen this?
20:53 glusterbot gbox: -------'s karma is now -7
20:58 hijakk1 joined #gluster
21:03 gbox OK no the blank permissions are on the brick path
21:07 David_H_Smith joined #gluster
21:13 reddrgon joined #gluster
21:24 ira joined #gluster
21:29 reddrgon left #gluster
21:29 tertiary joined #gluster
21:34 tertiary so with a stripe volume on 2 nodes, a file added to the volume should be visible on both nodes correct?
21:44 ndevos tertiary: you really do not want to use ,,(stripe), it is deprecated
21:44 glusterbot tertiary: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
21:45 ndevos tertiary: if you have big files and want to split them into smaller pieces, use sharding instead: http://blog.gluster.org/2015/12/introducing-shard-translator/
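An illustrative sharding setup (the option names exist; the block size is just an example, <volname> a placeholder, and sharding should be enabled before data is written):

    gluster volume set <volname> features.shard on
    gluster volume set <volname> features.shard-block-size 64MB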
21:47 ndevos gbox: if bind mounts do not work in recent versions, you should probably file a bug
21:47 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
21:48 * ndevos waves good night _o/
21:48 ovaistariq joined #gluster
21:57 gbox ndevos: It's possible it's not a bug.  Bind mounts introduce indirection in the filesystem.  How would gluster handle that?  Or more specifically why would gluster see the bind mount point as the path?
21:58 gbox ndevos: Did you see the log I posted (http://ur1.ca/opglx)?  Would you consider it a bug?
21:58 glusterbot Title: #347191 Fedora Project Pastebin (at ur1.ca)
22:08 robb_nl joined #gluster
22:24 caitnop joined #gluster
23:04 jlockwood joined #gluster
23:05 jockek joined #gluster
23:09 dlambrig_ joined #gluster
23:11 rwheeler joined #gluster
23:18 DV joined #gluster
23:19 Neilo joined #gluster
23:20 Neilo [o__o]: seen JoeJulian?
23:20 [o__o] Yes, I saw JoeJulian 16 hours ago.
23:20 [o__o] JoeJulian said: "signum 15 suggests that the process completed and was terminated."
23:24 plarsen joined #gluster
23:59 rouven hey everybody. i have a replica 2 volume exported via nfs with glusterfs 3.7.8-2 on centos 7. i have one client running the same glusterfs version as the peers that can mount the nfs volume from one peer but not from the other ("mount.nfs: Remote I/O error"). is there something i misunderstood or is there a clear path where to do further debugging in this case? nfs.log doesn't even show the connection attempt. the client can ping both peers using their fqdn. firewalld is disabled in both cases and selinux is set to permissive
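Some first checks for that kind of asymmetric NFS failure (a sketch; <peer-fqdn> and <volname> are placeholders):

    showmount -e <peer-fqdn>             # is the volume exported by that peer at all?
    rpcinfo -p <peer-fqdn>               # are nfs/mountd/nlockmgr registered there?
    gluster volume status <volname> nfs  # is the gluster NFS server online on both peers?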
