
IRC log for #gluster, 2017-03-07


All times shown according to UTC.

Time Nick Message
00:14 cholcombe joined #gluster
00:16 rastar joined #gluster
00:48 overyander joined #gluster
01:06 shdeng joined #gluster
01:28 rastar joined #gluster
01:45 vinurs joined #gluster
01:56 MrAbaddon joined #gluster
02:04 jbrooks joined #gluster
02:09 Jules- joined #gluster
02:09 daMaestro joined #gluster
02:13 kramdoss_ joined #gluster
02:19 rastar joined #gluster
02:21 plarsen joined #gluster
02:24 MrAbaddon joined #gluster
02:32 derjohn_mob joined #gluster
02:41 arpu joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:59 rastar joined #gluster
03:01 ppai joined #gluster
03:09 rastar joined #gluster
03:27 rastar joined #gluster
03:41 oajs_ joined #gluster
03:45 atinm joined #gluster
03:48 prasanth joined #gluster
03:54 itisravi joined #gluster
03:56 dominicpg joined #gluster
03:59 rastar joined #gluster
04:04 gyadav joined #gluster
04:07 farhorizon joined #gluster
04:25 rastar joined #gluster
04:28 skumar joined #gluster
04:41 Shu6h3ndu joined #gluster
04:41 d0nn1e joined #gluster
04:42 RameshN joined #gluster
04:44 karthik_us joined #gluster
04:51 jiffin joined #gluster
04:52 nishanth joined #gluster
05:08 BitByteNybble110 joined #gluster
05:09 aravindavk joined #gluster
05:12 Prasad joined #gluster
05:15 ashiq joined #gluster
05:17 ndarshan joined #gluster
05:17 msvbhat joined #gluster
05:24 Saravanakmr joined #gluster
05:27 apandey joined #gluster
05:28 rafi joined #gluster
05:28 rastar joined #gluster
05:29 atmosphere joined #gluster
05:29 atm0sphere joined #gluster
05:31 rafi1 joined #gluster
05:49 rastar joined #gluster
05:57 sbulage joined #gluster
06:06 [diablo] joined #gluster
06:07 buvanesh_kumar joined #gluster
06:11 rjoseph joined #gluster
06:12 hgowtham joined #gluster
06:13 nbalacha joined #gluster
06:15 nbalacha joined #gluster
06:17 anbehl joined #gluster
06:17 rafi1 joined #gluster
06:18 susant joined #gluster
06:27 atm0sphere joined #gluster
06:30 Wizek_ joined #gluster
06:31 ksandha_ joined #gluster
06:35 pioto joined #gluster
06:37 susant joined #gluster
06:38 sanoj joined #gluster
06:42 atm0s joined #gluster
06:44 atm0sphere joined #gluster
06:49 rastar joined #gluster
06:53 mbukatov joined #gluster
06:54 atm0sphere joined #gluster
06:55 pioto joined #gluster
06:56 ankitr joined #gluster
06:59 Saravanakmr joined #gluster
07:07 msvbhat joined #gluster
07:10 Philambdo joined #gluster
07:15 mhulsman joined #gluster
07:15 mhulsman joined #gluster
07:16 mhulsman1 joined #gluster
07:18 kotreshhr joined #gluster
07:26 jtux joined #gluster
07:45 mhulsman joined #gluster
07:47 mhulsman1 joined #gluster
07:50 jtux joined #gluster
07:55 armin_ joined #gluster
07:56 pulli joined #gluster
07:57 ahino joined #gluster
07:58 pasik joined #gluster
08:02 Ulrar joined #gluster
08:08 ivan_rossi joined #gluster
08:16 Akram joined #gluster
08:16 Edo4567 joined #gluster
08:20 nishanth joined #gluster
08:23 Edo4567 Hi all! I have a slight performance problem and I'd like to know if it's normal because of the environment or if I did something wrong
08:24 cholcombe_ joined #gluster
08:26 Edo4567 time find | wc -l -> 51 ->  real    0m2.738s user    0m0.000s sys     0m0.000s
08:27 Edo4567 2.7 seconds for 51 (still empty) folders
08:27 k4n0 joined #gluster
08:29 anbehl joined #gluster
08:37 anbehl joined #gluster
08:37 atm0sphere joined #gluster
08:41 ivan_rossi left #gluster
08:44 nbalacha Edo4567, how many bricks do you have in your volume?
08:45 fsimonce joined #gluster
08:50 Edo4567 nbalacha: 2 x 1 replicated bricks. The RTT is ~30ms between the 2 nodes.
08:50 buvanesh_kumar joined #gluster
08:50 nbalacha Edo4567, so, a pure distribute volume
08:50 flying joined #gluster
08:51 anbehl joined #gluster
08:51 nbalacha Edo4567, there are known issues with the readdir/p performance in gluster.
08:51 nbalacha that is being worked on
08:51 Edo4567 nbalacha: sorry, no, it's a pure replicated volume (is it written 1 x 2 ?)
08:52 nbalacha Edo4567, yes, pure replicate would be 1x2 :)
08:53 nbalacha let me see if I can find the dev working on that
08:54 nbalacha In the meantime, would you be willing to take a network trace and send that across?
08:54 Edo4567 I tried to follow these suggestions to enable md-cache: http://blog.gluster.org/2016/10/gluster-tiering-and-small-file-performance/ but nothing changed
08:54 glusterbot Title: Gluster tiering and small file performance | Gluster Community Website (at blog.gluster.org)
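(For reference, the md-cache tuning that blog post recommends comes down to a handful of volume options; "myvol" is a placeholder volume name and the 600-second timeouts are illustrative values, not requirements:)

    # enable upcall-based cache invalidation so md-cache can hold entries longer
    gluster volume set myvol features.cache-invalidation on
    gluster volume set myvol features.cache-invalidation-timeout 600
    # cache stat/xattr metadata on the client side
    gluster volume set myvol performance.stat-prefetch on
    gluster volume set myvol performance.cache-invalidation on
    gluster volume set myvol performance.md-cache-timeout 600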
08:54 pulli joined #gluster
08:55 Edo4567 nbalacha: sorry, what is a network trace? Is the one obtained by volume profiling?
08:56 nbalacha Edo4567, I was thinking of a tcpdump
08:56 nbalacha looks like the dev working on this is currently busy
08:57 buvanesh_kumar joined #gluster
08:57 nbalacha could you send an email to gluster-devel@gluster.org with a description of the issue?
08:57 nbalacha Edo4567, ^^^
08:59 hexasoft joined #gluster
09:00 hexasoft hello
09:00 glusterbot hexasoft: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:00 hexasoft polite people say HELO, bot.
09:01 hexasoft I have a problem compiling 3.10.0 on debian 8.6 (64b)
09:02 hexasoft I get the sources, extracted them. If I just run ./configure I get "configure: WARNING: cache variable ac_cv_build contains a newline"
09:03 hexasoft if I run ./autogen.sh before it complains a lot and configure failed with: "./configure: line 13075: syntax error near unexpected token `UUID," + "./configure: line 13075: `PKG_CHECK_MODULES(UUID, uuid,'"
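(An unexpanded PKG_CHECK_MODULES in a generated configure script is the classic symptom of pkg.m4 being absent when autogen.sh ran aclocal, i.e. pkg-config is not installed. A sketch of the usual fix, with Debian 8 package names:)

    # install pkg-config plus the uuid headers configure is probing for
    apt-get install pkg-config uuid-dev
    # regenerate configure now that pkg.m4 is available, then re-run it
    ./autogen.sh
    ./configure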
09:04 ppai joined #gluster
09:05 Edo4567 nbalacha: thank you! what should I filter in tcpdump? Anyway if you tell me it's a known issue, I'll start to look at other solutions.
09:05 nbalacha Edo4567, the issue we are working on is mainly seen in distributed volumes
09:05 nbalacha so this could be different
09:05 nbalacha it would be worth having the dev take a look at it
09:05 nbalacha tcpdump - you can try filtering by glusterfs
09:12 sanoj joined #gluster
09:15 Edo4567 nbalacha: like this? tcpdump -w glusterfs.pcap -i any -s 0  tcp and portrange 24007-24100 (found googling)
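(One caveat on that googled command: 24007 is the management port, but since glusterfs 3.4 the bricks themselves default to ports from 49152 upward, so the capture range may need to include those. A sketch, to be run while repeating the slow find:)

    # capture management plus brick traffic during the slow directory listing
    tcpdump -w glusterfs.pcap -i any -s 0 \
        'tcp and (port 24007 or portrange 49152-49251)'
    # quick first pass without a GUI; Wireshark/tshark ship Gluster dissectors
    tshark -r glusterfs.pcap | head -50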
09:16 magrawal joined #gluster
09:17 ankitr joined #gluster
09:17 derjohn_mob joined #gluster
09:19 buvanesh_kumar_ joined #gluster
09:21 Edo4567 nbalacha: I'll collect some info and contact the mail you suggested, but now I haven't got the time. This evening maybe. Thank you for now!
09:23 glusterbot` joined #gluster
09:31 ahino joined #gluster
09:35 MadPsy joined #gluster
09:35 MadPsy joined #gluster
09:41 glusterbot joined #gluster
09:42 skumar_ joined #gluster
09:43 hybrid512 joined #gluster
09:45 Saravanakmr joined #gluster
09:53 jkroon joined #gluster
09:54 Seth_Karlo joined #gluster
09:57 RameshN joined #gluster
09:59 rastar joined #gluster
10:00 atm0sphere joined #gluster
10:03 Seth_Karlo joined #gluster
10:04 buvanesh_kumar_ joined #gluster
10:05 nimda_ joined #gluster
10:06 nimda_ Hello, I am completely new to glusterfs. I had 2 replicated bricks on 2 servers (1 on each). I added another one to the cluster and increased the replica count to 3.
10:06 nimda_ do i need to rebalance the volume?
10:07 nimda_ one of the nodes has high system time. I used strace and found out the futex syscall is taking 80% of the time.
10:10 john4 jiffin: (a while later), I just tested the auth.allow thing in two fresh VM and even there it's stopping all clients, something must be seriously wrong somewhere, maybe the build? or me?
10:10 ashiq joined #gluster
10:14 Norky joined #gluster
10:18 skoduri joined #gluster
10:20 buvanesh_kumar_ joined #gluster
10:21 buvanesh_kumar joined #gluster
10:22 ndarshan joined #gluster
10:23 MrAbaddon joined #gluster
10:27 shruti joined #gluster
10:30 nimda_ joined #gluster
10:34 RameshN joined #gluster
10:35 skumar__ joined #gluster
10:35 Saravanakmr joined #gluster
10:35 MrAbaddon joined #gluster
10:40 kotreshhr left #gluster
10:44 msvbhat joined #gluster
10:54 skoduri joined #gluster
10:55 nbalacha nimda_, rebalance is not required for pure replicate volumes
10:56 nbalacha nimda_, if the volume info shows type 1x3 , it is a pure replicate
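(A sketch of the replica-2 to replica-3 expansion nimda_ describes, with hypothetical host and volume names; as nbalacha says, the new brick is populated by self-heal, not by rebalance:)

    gluster peer probe server3
    gluster volume add-brick myvol replica 3 server3:/bricks/brick1
    # optionally kick off a full heal instead of waiting for lazy healing
    gluster volume heal myvol full
    # the volume should now report "Number of Bricks: 1 x 3 = 3"
    gluster volume info myvol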
11:10 anbehl joined #gluster
11:15 anbehl joined #gluster
11:15 RameshN joined #gluster
11:17 mhulsman joined #gluster
11:20 Philambdo joined #gluster
11:21 skumar joined #gluster
11:21 ndarshan joined #gluster
11:22 jiffin john4: I hope it is not related to firewalld, can u try mounting the client on the same server machine?
11:24 msvbhat joined #gluster
11:25 gyadav joined #gluster
11:28 john4 jiffin: mounting through localhost works, even if not listed in allow
11:28 skoduri joined #gluster
11:28 jiffin john4: can u disable firewalld and try mounting?
11:29 john4 'am searching for it but I think it's not installed by default on ubuntu 16.04
11:30 chris349 joined #gluster
11:30 john4 and iptables shows empty chains
11:30 hgowtham #REMINDER: gluster community bug triage to take place in 30 minutes at #gluster-meeting
11:30 mhulsman1 joined #gluster
11:32 nh2 joined #gluster
11:32 karthik_ joined #gluster
11:33 shyam joined #gluster
11:36 gyadav joined #gluster
11:41 john4 jiffin: maybe some input: if I "gluster volume set testallow auth.allow 192.168.122.186" then in brick logs I immediately get "0-testallow-server: unauthorized client, hence terminating the connection 192.168.122.186"
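(A minimal reproduction of what john4 is testing, assuming a volume named "testallow" as in his log line; on recent releases `gluster volume get` can confirm the stored value:)

    gluster volume set testallow auth.allow 192.168.122.186
    gluster volume get testallow auth.allow     # confirm what glusterd stored
    # from 192.168.122.186 this mount should succeed, but per the bug it is rejected
    mount -t glusterfs server1:/testallow /mnt
    gluster volume reset testallow auth.allow   # back to the allow-all default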
11:46 nimda_ joined #gluster
12:09 karthik_us joined #gluster
12:11 ankush joined #gluster
12:21 ankush joined #gluster
12:36 ahino joined #gluster
12:40 ira joined #gluster
12:51 thatgraemeguy joined #gluster
12:53 RameshN joined #gluster
12:54 kpease joined #gluster
12:56 kpease_ joined #gluster
12:59 Seth_Kar_ joined #gluster
13:06 Saravanakmr joined #gluster
13:16 atinm joined #gluster
13:20 jbrooks joined #gluster
13:20 skoduri joined #gluster
13:21 Drankis joined #gluster
13:24 john4 I got allowed = "192.168.122.186", received addr = "R" is this normal?
13:36 john4 according to source code it's not normal... ok
13:37 nishanth joined #gluster
13:38 john4 allowed = "192.168.122.186", received addr = "m"
13:38 john4 reading some random memory as client address?
13:39 ahino joined #gluster
13:40 rastar joined #gluster
13:43 john4 hum, between 3.9 and 3.10 the big switch/case filling peer_addr disappeared so it's indeed some random memory AFAICT
13:47 shyam joined #gluster
13:51 baber joined #gluster
13:53 john4 seems to have been removed with a patch about brick multiplexing
13:53 unclemarc joined #gluster
13:57 susant left #gluster
13:59 nimda_ [2017-03-07 13:59:19.558521] W [MSGID: 114031] [client-rpc-fops.c:2933:client3_3_lookup_cbk] 2-images-client-2: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]
14:00 nimda_ my log is filled with this line
14:00 nimda_ how can I investigate it more?
14:03 plarsen joined #gluster
14:03 john4 ok found same report on bugzilla, will comment with what I've put here
14:05 Ashutto joined #gluster
14:08 mchangir joined #gluster
14:09 mchangir ALL: Gluster RPC Internals - Lecture #2 - starting NOW: Blue Jeans Meeting ID: 1546612044
14:19 nh2 joined #gluster
14:19 ppai joined #gluster
14:22 nbalacha joined #gluster
14:23 mhulsman joined #gluster
14:41 skylar joined #gluster
14:50 Philambdo joined #gluster
15:00 farhorizon joined #gluster
15:00 nh2 joined #gluster
15:11 atinm joined #gluster
15:14 arpu joined #gluster
15:26 hexasoft left #gluster
15:26 [diablo] joined #gluster
15:26 Karan joined #gluster
15:30 Karan joined #gluster
15:31 rastar joined #gluster
15:37 kenansulayman joined #gluster
15:40 msvbhat joined #gluster
15:43 nh2 joined #gluster
15:47 major JoeJulian, so .. I have pretty much all of the LVM stuff off in its own little lvm-snapshot library .. though some of the original code seems pretty questionable .. either I don't grok the "when" some of these functions are being called in the greater communication path, or some of this is just really .. hacky..
15:50 Seth_Karlo joined #gluster
15:51 major case in point .. it .. appears .. that the current "barrier" test for snapshot support is to see if the client can execute "lvcreate" ?
15:54 Seth_Karlo joined #gluster
15:54 cliluw joined #gluster
15:55 major seems like that test should be done at the brick...
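(Context for that exchange: gluster volume snapshots require every brick to live on a thinly provisioned LV, which is a property of the brick hosts, so probing for lvcreate on the client side proves little. A brick-side check might look like this, with hypothetical VG/LV names:)

    # a thin LV shows its pool in the pool_lv column
    lvs -o lv_name,vg_name,pool_lv vg0/brick1
    # if all bricks qualify, snapshot creation is a one-liner
    gluster snapshot create snap1 myvol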
15:58 pioto joined #gluster
16:01 pioto joined #gluster
16:02 susant joined #gluster
16:02 farhorizon joined #gluster
16:02 cliluw joined #gluster
16:05 nbalacha joined #gluster
16:10 rastar joined #gluster
16:10 Seth_Karlo joined #gluster
16:12 wushudoin joined #gluster
16:12 wushudoin joined #gluster
16:15 Seth_Kar_ joined #gluster
16:22 Gambit15 joined #gluster
16:24 cliluw joined #gluster
16:31 rafi joined #gluster
16:31 nbalacha joined #gluster
16:42 bhakti joined #gluster
16:43 bwerthmann joined #gluster
16:44 cliluw joined #gluster
16:44 bhakti joined #gluster
16:46 msvbhat joined #gluster
16:56 jkroon joined #gluster
17:04 mchangir joined #gluster
17:07 jbrooks joined #gluster
17:12 major okay .. found a semi-sane solution .. I think
17:14 major I "think" I can finally start hooking in btrfs checks
17:22 major erm .. nope .. one more commit ..
17:24 mchangir left #gluster
17:25 Humble joined #gluster
17:27 nh2 joined #gluster
17:30 alvinstarr joined #gluster
17:35 idef1x joined #gluster
17:40 nbalacha joined #gluster
17:41 d0nn1e joined #gluster
17:52 nh2 joined #gluster
17:55 cliluw joined #gluster
17:55 soloslinger left #gluster
17:55 idef1x joined #gluster
18:00 jiffin joined #gluster
18:03 bwerthmann joined #gluster
18:04 matt_ joined #gluster
18:04 kpease joined #gluster
18:06 matt_ hi, I have glusterfs as a root filesystem for a machine and when I try to compile gnutls i'm getting "sed: read error on stdin: No such file or directory" and "/bin/grep: (standard input): No such file or directory" and when I try to compile guile i'm getting this ... http://termbin.com/wdif
18:06 matt_ is there any type of file that gluster can't handle, like sockets or fifos or links or something?
18:07 matt_ the stdin is interesting, my /dev/stdin is a symlink to /proc/self/fd/0
18:10 rwheeler joined #gluster
18:19 nh2 joined #gluster
18:20 Seth_Karlo joined #gluster
18:26 rwheeler joined #gluster
18:27 major /dev is on udev or devtmpfs?
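(major's question points at a common culprit: if /dev on a glusterfs root is a plain directory instead of a mounted devtmpfs, redirections through /dev/stdin can fail exactly like this. A quick check, sketched:)

    mount | grep ' /dev '
    # if nothing is mounted there, mount a devtmpfs over it
    mount -t devtmpfs devtmpfs /dev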
18:31 cholcombe joined #gluster
18:34 john51 joined #gluster
18:37 decayofmind Hi, I ran "heal info" on a volume and noticed there are a lot of gfid entries without corresponding "files". Can I delete them?
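(Background on that question: each gfid entry maps to a file under the brick's .glusterfs directory, named by the first two byte-pairs of the gfid; for regular files it is a hard link, so the real path can be recovered by inode before deciding anything is safe to delete. The gfid and brick path below are made up:)

    ls -li /bricks/brick1/.glusterfs/6d/e2/6de2c0de-aaaa-bbbb-cccc-123456789abc
    # use the inode number printed above to locate the corresponding file, if any
    find /bricks/brick1 -inum <inode> -not -path '*/.glusterfs/*'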
19:02 rastar joined #gluster
19:04 kpease joined #gluster
19:07 farhorizon joined #gluster
19:08 nh2 joined #gluster
19:14 PatNarciso fellas - I'm considering 10g (fiber over copper) vs 40g infiniband.  as I don't have a 10g setup yet, there is a lot to take in/consider.
19:14 PatNarciso pricing for infiniband is about the same as for a basic 10g fiber setup.  I'm thinking w/ a Ubiquiti switch.
19:15 PatNarciso reason I'm favoring fiber is the *slightly* lower latency.
19:16 PatNarciso if ya have any feedback; or horror/success stories of your infiniband or 10g setup-- please share.
19:16 glusterbot PatNarciso: setup's karma is now -1
19:16 PatNarciso glusterbot:  as it should be.
19:18 plarsen joined #gluster
19:18 PatNarciso ... the infiniband docs I'm reading are from ~2007, when infiniband was great for the 40gbps speed and low hardware overhead.
19:19 PatNarciso then I see some docs suggesting infiniband is dead... and I question if I'm wasting my time researching.
19:20 vbellur joined #gluster
19:28 * PatNarciso sees, and is reminded of gluster's built-in support for infiniband.
19:30 * PatNarciso reads Mellanox's whitepaper on IO, comparing infiniband with other options.
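(That built-in support is the rdma transport: a volume can carry tcp and rdma side by side, and clients choose at mount time. Names below are hypothetical, and the option spellings are worth verifying against the running gluster version:)

    gluster volume create ibvol replica 2 transport tcp,rdma \
        server1:/bricks/b1 server2:/bricks/b1
    gluster volume start ibvol
    # IB clients request the rdma transport explicitly
    mount -t glusterfs -o transport=rdma server1:/ibvol /mnt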
19:37 nh2 joined #gluster
19:38 vbellur joined #gluster
19:38 vbellur joined #gluster
19:39 vbellur joined #gluster
19:39 vbellur joined #gluster
19:40 vbellur joined #gluster
19:41 vbellur joined #gluster
19:41 sona joined #gluster
19:43 k4n0 joined #gluster
19:43 JoeJulian IB should have lower latency
19:45 JoeJulian I think the only real advantage of ethernet is that you use ethernet everywhere. When you have the limited space and ports of a 1u server, it's hard to justify having both.
19:45 kpease joined #gluster
19:45 jiffin joined #gluster
19:46 JoeJulian If you have racks of servers accessing your storage, wiring up IB to all your clients may be overkill.
19:47 JoeJulian Since the server-to-server data needs are relatively small, there's no major advantage to using IB between servers.
19:47 JoeJulian Anyway... I'm going home. bbl.
19:48 PatNarciso Thanks JoeJulian.
19:48 msvbhat joined #gluster
19:48 * PatNarciso picked up 2x https://www.amazon.com/gp/product/B00TKR7PRK/ for $900 ea.  Loading them up with disks.  Deff not rack friendly.
19:51 nishanth joined #gluster
19:52 ahino joined #gluster
19:58 nh2 joined #gluster
19:58 cliluw joined #gluster
19:59 vbellur joined #gluster
20:00 Vapez_ joined #gluster
20:11 mhulsman joined #gluster
20:22 moneylotion joined #gluster
20:24 DV joined #gluster
20:25 sona joined #gluster
20:29 farhorizon joined #gluster
20:37 vbellur joined #gluster
20:40 aivaras joined #gluster
20:43 om2 joined #gluster
20:48 major okay .. soo .. I think I get to totally cheat and just amend the mnt_opts to make btrfs work
20:49 major though .. on a volume restore its gonna get .. weird
20:49 * major thinks.
20:49 aivaras Hi guys, I'm writing a Python script for mounting a remote Gluster volume on the Linux FS, can someone suggest the best way to do that without using Linux commands? I didn't find any Python libs for that, so I'm thinking using the C libs is the best way?
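(One note on that question: mount.glusterfs is itself a shell helper that spawns the glusterfs FUSE client process, so there is no pure in-process way to get a kernel-visible mount; scripts generally end up invoking the equivalent of the following, with placeholder server and volume names:)

    mount -t glusterfs -o backup-volfile-servers=server2 server1:/myvol /mnt/gluster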
21:04 baber joined #gluster
21:12 panina joined #gluster
21:13 linuxaddicts joined #gluster
21:16 moneylotion joined #gluster
21:22 nh2 joined #gluster
21:30 moneylotion joined #gluster
21:35 mhulsman joined #gluster
22:09 farhorizon joined #gluster
22:11 PatNarciso aivaras, I don't think i'm able to help... altho I'm curious: why avoid linux commands?
22:13 Vaizki joined #gluster
22:49 overyander joined #gluster
23:01 pioto joined #gluster
23:03 Klas joined #gluster
23:10 vbellur joined #gluster
23:10 vbellur joined #gluster
23:11 vbellur joined #gluster
23:11 vbellur joined #gluster
23:12 vbellur joined #gluster
23:13 vbellur1 joined #gluster
23:16 cliluw joined #gluster
23:20 BitByteNybble110 joined #gluster
23:23 amarts joined #gluster
23:26 Iouns joined #gluster
23:26 PTech joined #gluster
23:28 vbellur joined #gluster
23:35 Wizek_ joined #gluster
23:41 JPaul joined #gluster
23:50 vbellur joined #gluster
23:52 vbellur joined #gluster
23:52 JPaul joined #gluster
23:55 vbellur joined #gluster
23:55 vbellur joined #gluster
23:56 vbellur joined #gluster
23:59 JPaul joined #gluster
