
IRC log for #gluster, 2015-05-13


All times shown according to UTC.

Time Nick Message
00:00 ShaunR joined #gluster
00:01 ShaunR I'm confused about something, I just installed gluster on 3 nodes using http://www.gluster.org/community/documentation/index.php/QuickStart
00:01 ShaunR as far as i can tell there was no auth setup
00:01 ShaunR is gluster just going to let any client/server connect with no auth details?
00:04 mjrosenb ShaunR: I believe this is the case (not 100% sure though)
00:11 vovcia ShaunR: You can limit access to certain IPs
00:25 ShaunR I just can't imagine it defaults to open like that... I mean, I guess I should start scanning the net for open gluster servers!?
00:26 jeek Why not? Everybody else is. :)
00:27 jeek nmap -p111,24007-24012 0.0.0.0/0
00:36 mjrosenb ShaunR: I would imagine that most gluster users are limited to a single subnet with no access to the outside.
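
For reference, limiting access by IP as vovcia suggests is a per-volume option; a minimal sketch, assuming a volume named gv0 and a trusted 10.0.0.0/24 range (both placeholders):

    # only clients matching the pattern may mount the volume
    gluster volume set gv0 auth.allow 10.0.0.*
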
01:16 harish_ joined #gluster
01:25 vovcia mjrosenb: or maybe there is some IPv6 world with full connectivity outside
01:26 vovcia jeek: check zmap instead of nmap, much better for scanning IPv4 space ;]]]
01:38 jeek vovcia: Any suggestions for automatically generating a directed graph of my switch/computer layout?
01:39 jeek Keeping in mind that I'm administrating a couple of /18s. ;)
01:39 vovcia jeek: photo camera
01:40 vovcia couple of /18 might not fit :PP
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:06 halfinhalfout joined #gluster
02:30 nangthang joined #gluster
02:35 aaronott joined #gluster
03:13 bharata-rao joined #gluster
03:14 Pupeno joined #gluster
03:19 _ndevos joined #gluster
03:19 _ndevos joined #gluster
03:33 gildub joined #gluster
03:39 shubhendu_ joined #gluster
03:47 TheSeven joined #gluster
03:49 itisravi joined #gluster
04:03 nbalacha joined #gluster
04:12 nishanth joined #gluster
04:13 RameshN joined #gluster
04:14 meghanam joined #gluster
04:14 kanagaraj joined #gluster
04:31 ashiq joined #gluster
04:34 sakshi joined #gluster
04:35 schandra joined #gluster
04:39 rafi joined #gluster
04:41 deepakcs joined #gluster
04:45 DV joined #gluster
04:48 spandit joined #gluster
04:58 ndarshan joined #gluster
05:01 Manikandan_ joined #gluster
05:01 Manikandan joined #gluster
05:01 gem_ joined #gluster
05:01 pppp joined #gluster
05:09 jiffin joined #gluster
05:10 glusterbot News from newglusterbugs: [Bug 1220996] Running `gluster volume heal testvol info` on a volume that is not started results in a core. <https://bugzilla.redhat.com/show_bug.cgi?id=1220996>
05:14 nixpanic_ joined #gluster
05:15 Apeksha joined #gluster
05:15 nixpanic_ joined #gluster
05:16 csim_ joined #gluster
05:16 jobewan_ joined #gluster
05:16 osiekhan4 joined #gluster
05:16 Bhaskarakiran joined #gluster
05:17 trig_ joined #gluster
05:19 eljrax_ joined #gluster
05:22 dusmant joined #gluster
05:26 ndk_ joined #gluster
05:27 rshade98 joined #gluster
05:27 ir2ivps5 joined #gluster
05:27 anil joined #gluster
05:31 SOLDIERz joined #gluster
05:31 lyang0 joined #gluster
05:33 harish_ joined #gluster
05:34 pppp joined #gluster
05:38 ppai joined #gluster
05:40 glusterbot News from newglusterbugs: [Bug 1220999] nfs-ganesha: when selinux is on volume export fails <https://bugzilla.redhat.com/show_bug.cgi?id=1220999>
05:56 ppai joined #gluster
06:04 nsoffer joined #gluster
06:12 ppai joined #gluster
06:14 ccha joined #gluster
06:15 karnan joined #gluster
06:20 JoeJulian papamoose1: glusterbot needs to stop linking to which page?
06:20 fattaneh joined #gluster
06:21 JoeJulian ShaunR: During the quick start you created a "trusted pool". Once this pool has been established, only servers within the pool can add other servers.
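
For reference, the trusted pool JoeJulian describes is built with peer probe, and probes from hosts outside the pool are rejected; a minimal sketch (hostname is a placeholder):

    # run from a server that is already in the pool
    gluster peer probe server4
    gluster peer status   # lists pool members and their connection state
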
06:22 sripathi joined #gluster
06:22 nsoffer joined #gluster
06:22 ccha joined #gluster
06:23 JoeJulian ShaunR: You *can* also configure ssl.
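
A minimal sketch of the SSL setup being referred to, assuming the default certificate paths of that era and a volume named gv0 (all names are placeholders):

    # on every node: key, self-signed cert, and a CA file concatenating all peers' certs
    openssl genrsa -out /etc/ssl/glusterfs.key 2048
    openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj "/CN=node1" -out /etc/ssl/glusterfs.pem
    cat node1.pem node2.pem node3.pem > /etc/ssl/glusterfs.ca
    # then enable TLS for the volume's data path
    gluster volume set gv0 client.ssl on
    gluster volume set gv0 server.ssl on
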
06:24 asmarre_ joined #gluster
06:27 anrao joined #gluster
06:30 jtux joined #gluster
06:30 harish_ joined #gluster
06:31 rgustafs joined #gluster
06:40 nangthang joined #gluster
06:41 Philambdo joined #gluster
06:42 fattaneh1 joined #gluster
06:43 atalur joined #gluster
06:44 atalur_ joined #gluster
06:46 bharata_ joined #gluster
06:57 anrao joined #gluster
07:05 poornimag joined #gluster
07:05 hagarth joined #gluster
07:05 spalai joined #gluster
07:08 kdhananjay joined #gluster
07:08 kkeithley1 joined #gluster
07:10 deniszh joined #gluster
07:14 soumya joined #gluster
07:16 spalai left #gluster
07:16 spalai joined #gluster
07:17 LebedevRI joined #gluster
07:19 dusmant joined #gluster
07:29 fattaneh1 left #gluster
07:41 glusterbot News from newglusterbugs: [Bug 1221025] Glusterd crashes after enabling quota limit on a distrep volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1221025>
07:52 aravindavk joined #gluster
07:52 dusmant joined #gluster
07:55 Slashman joined #gluster
08:00 maveric_amitc_ joined #gluster
08:00 Prilly joined #gluster
08:02 Anjana joined #gluster
08:11 glusterbot News from newglusterbugs: [Bug 1221032] Directories are missing post tier attach <https://bugzilla.redhat.com/show_bug.cgi?id=1221032>
08:13 Norky joined #gluster
08:17 al joined #gluster
08:18 hagarth @channelstats
08:18 glusterbot hagarth: On #gluster there have been 413766 messages, containing 15616564 characters, 2559156 words, 9060 smileys, and 1281 frowns; 1818 of those messages were ACTIONs.  There have been 193107 joins, 4751 parts, 188735 quits, 29 kicks, 2571 mode changes, and 8 topic changes.  There are currently 254 users and the channel has peaked at 276 users.
08:27 fattaneh joined #gluster
08:31 lanning joined #gluster
08:41 glusterbot News from newglusterbugs: [Bug 1221045] glusterfs-extra-xlators package is NOT being pulled in when installing the glusterfs-server <https://bugzilla.redhat.com/show_bug.cgi?id=1221045>
08:43 schandra joined #gluster
08:47 dusmant joined #gluster
08:48 fattaneh3 joined #gluster
08:49 shubhendu_ joined #gluster
08:49 ndarshan joined #gluster
08:59 Prilly joined #gluster
09:06 kshlm joined #gluster
09:06 harish_ joined #gluster
09:11 glusterbot News from newglusterbugs: [Bug 1221061] Detaching tier start failed on dist-rep volume <https://bugzilla.redhat.com/show_bug.cgi?id=1221061>
09:13 gem joined #gluster
09:14 gem joined #gluster
09:15 gem joined #gluster
09:16 shubhendu_ joined #gluster
09:18 ndarshan joined #gluster
09:22 spalai joined #gluster
09:23 kaushal_ joined #gluster
09:23 kaushal_ joined #gluster
09:31 fattaneh3 left #gluster
09:34 dusmant joined #gluster
09:41 glusterbot News from newglusterbugs: [Bug 1217311] Disperse volume: gluster volume status doesn't show shd status <https://bugzilla.redhat.com/show_bug.cgi?id=1217311>
09:41 glusterbot News from newglusterbugs: [Bug 1221095] Fix build warnings reported in Koji <https://bugzilla.redhat.com/show_bug.cgi?id=1221095>
09:41 glusterbot News from newglusterbugs: [Bug 1221099] gNFSd does not work correctly/consistently with FSCache/CacheFilesd <https://bugzilla.redhat.com/show_bug.cgi?id=1221099>
09:45 hchiramm_ joined #gluster
09:46 rjoseph joined #gluster
09:50 Anjana joined #gluster
09:51 gem joined #gluster
09:55 kaushal_ joined #gluster
09:55 kaushal_ joined #gluster
09:57 dusmant joined #gluster
10:00 maveric_amitc_ joined #gluster
10:02 Manikandan joined #gluster
10:02 Manikandan_ joined #gluster
10:11 glusterbot News from newglusterbugs: [Bug 1221100] Disperse volume: Directory became stale while renaming files in it. <https://bugzilla.redhat.com/show_bug.cgi?id=1221100>
10:19 getup joined #gluster
10:36 ilbot3 joined #gluster
10:36 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
10:38 LebedevRI joined #gluster
10:41 gem joined #gluster
10:41 glusterbot News from newglusterbugs: [Bug 1207735] Disperse volume: Huge memory leak of glusterfsd process <https://bugzilla.redhat.com/show_bug.cgi?id=1207735>
10:45 kaushal_ joined #gluster
10:45 kaushal_ joined #gluster
10:48 ccha joined #gluster
10:54 ira joined #gluster
11:05 [Enrico] joined #gluster
11:05 gildub joined #gluster
11:11 glusterbot News from newglusterbugs: [Bug 1221128] `gluster volume heal <vol-name> split-brain' tries to heal even with insufficient arguments <https://bugzilla.redhat.com/show_bug.cgi?id=1221128>
11:14 dusmant joined #gluster
11:23 kaushal_ joined #gluster
11:23 kaushal_ joined #gluster
11:24 morse joined #gluster
11:25 spalai left #gluster
11:29 kaushal_ joined #gluster
11:29 kaushal_ joined #gluster
11:31 ccha joined #gluster
11:33 rjoseph joined #gluster
11:38 JoeJulian http://projects.bitergia.com/redhat-glusterfs-dashboard/browser/
11:38 poornimag joined #gluster
11:39 atinmu joined #gluster
11:41 anoopcs JoeJulian, Is that the latest one?
11:42 glusterbot News from newglusterbugs: [Bug 1214994] Disperse volume: Rebalance failed when plain disperse volume is converted to distributed disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1214994>
11:42 glusterbot News from newglusterbugs: [Bug 1177167] ctdb's ping_pong lock tester fails with input/output error on disperse volume mounted with glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1177167>
11:42 Prilly joined #gluster
11:42 ndevos anoopcs: yes, that url contains updated statistics
11:42 ndevos whereas http://bitergia.com/projects/redhat-glusterfs-dashboard/browser/ does not seem to be current
11:43 DV joined #gluster
11:44 anoopcs ndevos, yeah.. I think I referred to the wrong one earlier today
11:44 nsoffer joined #gluster
11:56 pdrakeweb joined #gluster
11:58 JoeJulian anoopcs: yeah, apparently it must have changed at some point.
11:58 anoopcs JoeJulian, Ok. Just need to confirm
11:59 soumya joined #gluster
12:01 lpabon_ joined #gluster
12:08 ctria joined #gluster
12:13 itisravi_ joined #gluster
12:14 jeek joined #gluster
12:15 kaushal_ joined #gluster
12:16 ppai joined #gluster
12:19 lpabon joined #gluster
12:22 kovshenin joined #gluster
12:27 getup joined #gluster
12:27 nishanth joined #gluster
12:33 chirino joined #gluster
12:39 halfinhalfout joined #gluster
12:42 glusterbot News from newglusterbugs: [Bug 1221175] [geo-rep]: Session goes to faulty with "Cannot allocate memory" traceback when deletes were performed having trash translators ON <https://bugzilla.redhat.com/show_bug.cgi?id=1221175>
12:42 Manikandan_ joined #gluster
12:44 plarsen joined #gluster
12:52 RameshN joined #gluster
12:53 julim joined #gluster
12:55 scuttle|afk joined #gluster
12:56 owlbot joined #gluster
12:59 DV joined #gluster
13:00 wushudoin joined #gluster
13:14 dusmant joined #gluster
13:15 RameshN joined #gluster
13:25 theron joined #gluster
13:25 theron joined #gluster
13:29 georgeh-LT2 joined #gluster
13:32 Apeksha joined #gluster
13:35 klaxa|work joined #gluster
13:37 atinmu joined #gluster
13:39 dgandhi joined #gluster
13:40 hamiller joined #gluster
13:54 aaronott joined #gluster
13:58 bturner joined #gluster
14:00 sas_ joined #gluster
14:03 Twistedgrim joined #gluster
14:04 kaushal_ joined #gluster
14:04 kaushal_ joined #gluster
14:05 kshlm joined #gluster
14:13 coredump joined #gluster
14:15 haomaiwa_ joined #gluster
14:18 monotek joined #gluster
14:26 yoavz joined #gluster
14:27 meghanam joined #gluster
14:37 deepakcs joined #gluster
14:47 nbalacha joined #gluster
14:51 neofob joined #gluster
15:30 joseki joined #gluster
15:30 joseki can cluster.stripe-block-size be changed on an existing volume?
15:34 cholcombe joined #gluster
15:43 dbruhn joined #gluster
15:44 ju5t joined #gluster
15:55 jiffin joined #gluster
16:07 rafi joined #gluster
16:07 jiffin1 joined #gluster
16:09 gem joined #gluster
16:22 plarsen joined #gluster
16:22 alexcrow joined #gluster
16:24 sysadmin-di2e joined #gluster
16:25 lexi2 joined #gluster
16:27 sysadmin-di2e I'm having a lot of latency issues between my brick pairs.  I'm getting a high number of misses for the rpcsvc_request_t.  Any input would be great.
16:27 rafi1 joined #gluster
16:27 jiffin joined #gluster
16:30 bturner joined #gluster
16:38 jiffin joined #gluster
16:42 kumar joined #gluster
16:45 bturner_ joined #gluster
16:46 kumar joined #gluster
17:01 CyrilPeponnet Hey guys! I'm trying to understand why lstat syscalls can take like several seconds on my gNFS from time to time. I don't have this delay when using kNFS. Any clue?
17:06 Rapture joined #gluster
17:23 jmarley joined #gluster
17:53 Tester3118 joined #gluster
17:54 Tester3118 left #gluster
17:56 rafaelcapucho joined #gluster
17:57 haomaiwang joined #gluster
17:58 bturner_ CyrilPeponnet, gNFS clients are not aware of where files live the way glusterfs clients are, so requests go something like: client -> gNFS server it's mounting -> gNFS server with the file -> gNFS server it's mounting -> client
17:58 rafaelcapucho Hello, I'm using gluster to mirror site files between two VPSes. I have the gluster storage and the client mount dir on each server, but the files are duplicated (on server and client). Is there a way to mount or write to the server (on the same machine as the server) without duplicating the files?
17:59 bturner_ kernel NFS will only have that 1 hop, glusterfs will have at least 1 hop, and gNFS will have at least 1 but a much higher possibility that there will be more
18:00 bturner_ rafaelcapucho, what is a VPS?
18:02 rafaelcapucho bturner: I guess that it's a Virtual Private Server, a virtual machine using Xen...
18:02 nsoffer joined #gluster
18:02 jiffin joined #gluster
18:07 Prilly joined #gluster
18:09 DV__ joined #gluster
18:11 kanagaraj joined #gluster
18:32 David_H_Smith joined #gluster
18:37 CyrilPeponnet bturner_ I see, could increasing the io-threads and cache improve this? (the profile is lots of small files on a replica 3 volume)
18:38 CyrilPeponnet Sometimes it's fast, sometimes it's slow (I suspect a cache timeout)
18:38 bturner_ cyberbootje, you can try tuning the current version but the real improvements for smallfiles are coming out in 3.7:
18:38 bturner_ * jiffin has quit (Quit: jiffin)
18:38 bturner_ http://www.gluster.org/community/documentation/index.php/Features/Feature_Smallfile_Perf
18:39 bturner_ specifically lookup unhashed and MT epoll
18:39 bturner_ cyberbootje, also with 3.7 we are going to be moving to NFS ganesha for our NFS server
18:39 bturner_ oops CyrilPeponnet ^^
18:40 CyrilPeponnet I see
18:40 bturner_ CyrilPeponnet, things are probably cached on the fast runs?
18:40 CyrilPeponnet I guess so
18:40 CyrilPeponnet first access could take time
18:40 bturner_ CyrilPeponnet, easy way to tell is:
18:41 bturner_ time stat blah
18:41 bturner_ time stat blah (cached)
18:41 bturner_ echo 3 > /proc/sys/vm/drop_caches
18:41 bturner_ time stat blah
18:41 bturner_ CyrilPeponnet, that will show you cached vs uncached
18:42 CyrilPeponnet let me try this
18:43 bturner_ CyrilPeponnet, be sure to drop caches on both the client and the servers; the main caching will happen client side, but the server will still have it in RAM so it will be faster than everything uncached
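
Putting bturner_'s test together, a minimal sketch, assuming a gluster mount at /mnt/gv0 and servers reachable as server1/server2 (all names are placeholders; dropping caches requires root):

    time stat /mnt/gv0/somefile     # cold: first lookup goes over the wire
    time stat /mnt/gv0/somefile     # warm: should be much faster
    sync; echo 3 > /proc/sys/vm/drop_caches                      # clear client page cache
    ssh root@server1 'sync; echo 3 > /proc/sys/vm/drop_caches'   # clear server caches too
    ssh root@server2 'sync; echo 3 > /proc/sys/vm/drop_caches'
    time stat /mnt/gv0/somefile     # fully uncached again
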
18:43 CyrilPeponnet I don't want to destroy my production servers :p
18:44 CyrilPeponnet so first access took 9s, second 0.8s, I dropped the cache client side, next accesses are still fast
18:45 CyrilPeponnet but if I run like 10 accesses consecutively, let's say 9 of them will take like 0.01s and one 9s
18:45 CyrilPeponnet So not sure it's related to cache
18:46 CyrilPeponnet Using tcpdump I can see that when it's slow the NFS requests take time
18:47 bturner_ CyrilPeponnet, if drop caches doesn't clear it, it's probably not cached in the page cache
18:47 bturner_ NFS probably has it in its own cache
18:50 rafaelcapucho If I mount with glusterfs-client on the same machine as glusterfs-server, will it reuse the same space allocated by storing the files in the server dir, OR will it replicate the files into the client folder?
18:52 bturner_ rafaelcapucho, I guess I dont understand your question
18:53 bturner_ gluster replicates the files sync, it writes to all the bricks at the same time
18:53 bturner_ so writes go client -> (brick1, brick2)
18:53 bturner_ or for rep 3:
18:53 bturner_ so writes go client -> (brick1, brick2,brick3)
18:54 bturner_ it doesn't matter if one of the servers mounts itself, the file will go to all of the bricks in the replica pair / group
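
To illustrate the write path bturner_ describes, a minimal sketch of a two-way replicated volume (hostnames, brick paths and volume name are placeholders):

    gluster volume create gv0 replica 2 server1:/data/brick1 server2:/data/brick1
    gluster volume start gv0
    # any client, including one of the servers itself, mounts the volume rather than a brick:
    mount -t glusterfs server1:/gv0 /mnt/gv0

The mount point is only a view onto the bricks; it stores no extra copy on the mounting machine, so mounting on the same host as a brick does not duplicate the data.
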
19:17 capri joined #gluster
19:53 tdasilva joined #gluster
19:55 uebera|| joined #gluster
19:55 redbeard joined #gluster
20:17 tdasilva joined #gluster
20:19 tdasilva joined #gluster
20:21 plarsen joined #gluster
20:23 msmith_ joined #gluster
20:26 msmith_ joined #gluster
20:36 kovshenin hi! I'm running gluster on a high-latency network, and wondering if there's an option for asynchronous writes to help speed things up a little
20:40 papamoose1 check the options here: 'gluster volume set help | less'
20:46 kovshenin I did check those, and write-behind seems to be what I'm looking for, but I don't think it's doing what I want it to be doing :)
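
For reference, write-behind and its related knobs are per-volume options; a minimal sketch, assuming a volume named gv0 (the window size value is illustrative):

    gluster volume set gv0 performance.write-behind on
    gluster volume set gv0 performance.write-behind-window-size 4MB
    gluster volume set gv0 performance.flush-behind on

Note that write-behind only buffers small bursts of writes on the client; it cannot make a high-latency remote volume behave like a local disk, which is why geo-replication comes up next.
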
20:46 gildub joined #gluster
20:46 kovshenin in theory I want to be able to write to local disk and return immediately, and have gluster write to a remote volume in the background
20:46 CyrilPeponnet geo-replication could be a solution
20:47 papamoose1 2nd geo replication.
20:47 kovshenin do you mean run another node closer to where I'm trying to fetch it at?
20:48 CyrilPeponnet http://www.jamescoyle.net/how-to/1037-synchronise-a-glusterfs-volume-to-a-remote-site-using-geo-replication
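
A minimal sketch of the geo-replication workflow from that era (master volume, slave host and slave volume are placeholders; passwordless ssh to the slave is assumed to be set up already):

    gluster volume geo-replication gv0 slavehost::slavevol create push-pem
    gluster volume geo-replication gv0 slavehost::slavevol start
    gluster volume geo-replication gv0 slavehost::slavevol status
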
20:51 kovshenin yeah, I don't have enough disk to replicate the volume locally :(
20:51 jfdoucet joined #gluster
20:51 kovshenin the main reason I'm using gluster is to get a "bigger disk" on a 10G VPS
20:51 CyrilPeponnet oh you are in distributed mode with high latency between your nodes?
20:51 kovshenin yes, and I have a single node
20:52 CyrilPeponnet `in theory I want to be able to write to local disk and return immediately, and have gluster write to a remote volume in the background` what do you mean?
20:52 kovshenin well kind of like async with NFS, but I'm not entirely sure I know what async does
20:58 CyrilPeponnet bturner_ I made some tests on a separate 2-node setup (1 vol, replica 2), with the vol mounted using NFS on one of the nodes. First access takes a lot of time, then it's fast. I dropped the cache on both nodes, still fast.
20:59 CyrilPeponnet I tried adding some options but I didn't see any improvements
21:00 CyrilPeponnet only read access
21:01 bturner_ CyrilPeponnet, yeah with glusterfs mounts I know http://www.gluster.org/community/documentation/index.php/Translators/performance/stat-prefetch handles this but I am not sure how NFS does
21:02 CyrilPeponnet interesting, I already tried cluster.readdir-optimize
21:02 CyrilPeponnet how can I use this "translator"?
21:02 bturner_ CyrilPeponnet, it's loaded by default iirc
21:03 bturner_ CyrilPeponnet, any gluster option can be set with gluster volume set <my vol> <setting> <value>
21:04 bturner_ maybe not any, but most :)
21:04 CyrilPeponnet i know :)
21:04 CyrilPeponnet performance.stat-prefetch: on is already set :/
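
For reference, a minimal sketch of setting and then inspecting a volume option on a 3.5-era release (volume name is a placeholder):

    gluster volume set gv0 performance.stat-prefetch on
    gluster volume info gv0    # changed options appear under "Options Reconfigured"
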
21:04 sysadmin-di2e1 joined #gluster
21:04 aaronott1 joined #gluster
21:04 bturner_ so if the setting causes a translator to get loaded it will initiate a graph change, either client or server side, or both
21:05 bturner_ yeah it should be by default
21:05 marbu joined #gluster
21:05 JustinCl1ft joined #gluster
21:05 bturner_ CyrilPeponnet, what's your nedgame, just need faster metadata operations?
21:05 bturner_ better smallfile perf?
21:05 m0zes_ joined #gluster
21:05 bturner_ endgame
21:06 bturner_ not nedgame :)
21:06 CyrilPeponnet well let's say we have /vol/bin dir full of binaries (small).
21:06 tessier_ joined #gluster
21:06 CyrilPeponnet if I add this path to my PATH
21:06 CyrilPeponnet using the shell sometime hang
21:07 CyrilPeponnet because it takes time to lstat files in this dir.
21:07 bturner_ CyrilPeponnet, be sure to remove --color from ls, some systems alias it
21:07 xrsanet_ joined #gluster
21:07 CyrilPeponnet sure, I already did that :p
21:07 suliba joined #gluster
21:07 CyrilPeponnet even a source /vol/bla
21:07 CyrilPeponnet with a 2 line file
21:07 CyrilPeponnet sometimes takes like 10s
21:07 CyrilPeponnet (in general it's <1s)
21:08 CyrilPeponnet I'm trying to narrow down the issue
21:09 bitpushr_ joined #gluster
21:09 CyrilPeponnet (same issue when using tab to autocomplete commands or navigate through dirs)
21:09 hamiller joined #gluster
21:09 marbu joined #gluster
21:09 JustinCl1ft joined #gluster
21:10 harish__ joined #gluster
21:10 gothos joined #gluster
21:10 CyrilPeponnet according to an IO graph built from a tcpdump capture, when it's slow the NFS requests take time from a network point of view
21:11 ndk joined #gluster
21:11 mattmcc joined #gluster
21:11 ChrisHolcombe joined #gluster
21:11 bturner_ CyrilPeponnet, you can set up profiling on the volume and see what you can glean from that. In my experience gluster is pretty good at metadata operations vs other distributed systems, but with any distributed system metadata operations will take longer than on a local FS
21:11 TheSeven joined #gluster
21:12 papamoose joined #gluster
21:12 CyrilPeponnet sure, I'm just afraid to enable profiling on a setup with more than 1k clients
21:12 CyrilPeponnet :p
21:12 bturner_ CyrilPeponnet, also, try the 3.7 beta bits, there are smallfile perf enhancements there
21:13 CyrilPeponnet last time I did gluster vol status nfs inode, I had to restart the vol as all client NFS connections started to go stale
21:13 bturner_ CyrilPeponnet, yeah it _shouldn't_ be bad but I don't know all the caveats, it may be best to ask one of the devs
21:13 CyrilPeponnet (I'm still using 3.5.2 for now)
21:13 CyrilPeponnet I will take a look at the profiling
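
A minimal sketch of the profiling workflow being discussed (volume name is a placeholder; profiling adds some overhead while enabled):

    gluster volume profile gv0 start
    # ...reproduce the slow lstat()s...
    gluster volume profile gv0 info    # per-brick FOP latency statistics
    gluster volume profile gv0 stop
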
21:13 bitpushr joined #gluster
21:14 bturner_ CyrilPeponnet, also maybe an strace from the client side when you repro
21:14 bturner_ strace a slow one vs a fast one
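
A minimal sketch of such a capture (the path is a placeholder): -T prints the time spent inside each syscall and -tt adds wall-clock timestamps, which makes slow and fast runs easy to compare:

    strace -f -tt -T -o /tmp/slow.trace stat /mnt/gv0/somefile
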
21:14 CyrilPeponnet I already did that
21:14 CyrilPeponnet :p
21:14 bturner_ oh can you pastebin?
21:14 CyrilPeponnet sure let me retrieve it
21:17 bturner_ CyrilPeponnet, another thing is you could be hitting a hot thread server side, have a look at top -H and see if you see any threads pegging out a CPU at 100%
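
A minimal sketch of that check on a server, assuming the brick processes are named glusterfsd:

    top -H -b -n 1 -p "$(pgrep -d, glusterfsd)" | head -n 20
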
21:18 CyrilPeponnet For example here it hangs: 4.906240 lstat("/usr/img/current-238", {st_mode=S_IFLNK|0777, st_size=42, ...}) = 0
21:18 CyrilPeponnet 4.9s to do a stat on this folder
21:18 CyrilPeponnet (I have plenty of others)
21:18 CyrilPeponnet why this one? no clue, it's never the same
21:19 CyrilPeponnet next time 1.092109 lgetxattr("/usr/img/0.0",
21:19 surge joined #gluster
21:19 CyrilPeponnet so basically open / read / lstat / lgetxattr somehow take time
21:20 CyrilPeponnet from time to time
21:20 bturner_ CyrilPeponnet, hmm, this is worth an email to gluster-users.  You sent one out yet?
21:20 CyrilPeponnet Long time ago but no responses
21:21 Guest71514 hello, got a question that has been bugging me for a while. I have a volume, a mount point for the volume and the brick data directory. Can I use the brick data directory for reading only? For example backup?
21:21 bturner_ CyrilPeponnet, shoot one out with bturner@redhat.com in CC, I'll see if I can poke up some interest.  Most everyone is at the gluster dev summit this week but I'll see what I can do
21:21 CyrilPeponnet sure, as long as you don't write anything
21:22 CyrilPeponnet bturner_ thanks I will do that
21:22 CyrilPeponnet I will continue to do some investigation
21:22 bturner_ np! I'll see what I can do. gotta run, ttyl!
21:22 CyrilPeponnet thanks for your time :)
21:23 Guest71514 Is it a sane idea that accessing the brick directory for read-only will free up the gluster daemon?
21:23 CyrilPeponnet sure but you will stress your disks
21:24 CyrilPeponnet not that it works for replicated volumes
21:24 Guest71514 it doesn't work for replicated volumes?
21:24 Guest71514 why?
21:24 CyrilPeponnet only for replicated volumes
21:24 CyrilPeponnet as you have the same files across all replicas this is fine
21:25 Guest71514 ah, it does work then -- cool
21:25 CyrilPeponnet for distributed, basically you don't really know where your files are
21:25 Guest71514 ah, that's true
21:25 Guest71514 i have a 2-replica setup
21:25 Guest71514 awesome.
21:25 Guest71514 thanks so much
21:29 cholcombe joined #gluster
21:39 hagarth joined #gluster
21:42 rjoseph joined #gluster
21:45 obnox joined #gluster
21:50 soumya joined #gluster
22:20 julim joined #gluster
22:24 nsoffer joined #gluster
23:02 marcoceppi joined #gluster
23:02 marcoceppi joined #gluster
23:04 PaulCuzner joined #gluster
23:12 Kins joined #gluster
23:14 glusterbot News from newglusterbugs: [Bug 1029597] Geo-replication not work, rsync command error. <https://bugzilla.redhat.com/show_bug.cgi?id=1029597>
23:25 msmith_ joined #gluster
23:31 gildub joined #gluster
23:44 glusterbot News from newglusterbugs: [Bug 1221390] Replication is active but skips files and Rsync reports errcode 23 <https://bugzilla.redhat.com/show_bug.cgi?id=1221390>
23:55 crashmag joined #gluster
