
IRC log for #gluster, 2014-12-19


All times shown according to UTC.

Time Nick Message
00:08 TrDS left #gluster
00:11 sputnik13 joined #gluster
00:13 MacWinner joined #gluster
00:17 diegows joined #gluster
00:27 sputnik13 joined #gluster
00:33 gildub joined #gluster
00:35 calisto joined #gluster
00:42 sputnik13 joined #gluster
00:46 feeshon joined #gluster
01:24 _Bryan_ joined #gluster
01:33 fubada purpleidea: i just tried your module on puppetmaster 3.7.3 and it worked. However, same module set on PuppetServer 0.4.0 is not working
01:33 fubada with that Array vs String issue on @interfaces
01:33 fubada just fyi
01:41 nishanth joined #gluster
01:57 haomaiwa_ joined #gluster
02:02 harish joined #gluster
02:03 newdave joined #gluster
02:03 bala joined #gluster
02:03 newdave hi all - is it possible to set up a RAID-10 type storage scenario using gluster?
02:03 newdave ie. replicas 2 stripes 2
02:03 newdave ?
02:04 newdave (4 servers)
02:09 feeshon joined #gluster
02:18 lanning you can do that, but don't expect it to behave the way RAID10 does.
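For reference, the 3.4-era CLI syntax for the layout newdave describes looks roughly like the sketch below (hypothetical volume and brick names). Adjacent bricks in the argument list are paired into replica sets, which is worth verifying afterwards with gluster volume info, and the stripe translator has never behaved like block-level RAID10.

    gluster volume create myvol stripe 2 replica 2 transport tcp \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1
    gluster volume start myvol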
02:20 lalatenduM joined #gluster
02:25 newdave lanning: how so? i'm aiming for data redundancy with some possible added performance using 4x gluster servers:
02:26 newdave http://pastebin.com/cUGr8UVr
02:26 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
02:27 newdave weird thing is I can mount the volume from one of the clients (which is only running gluster 3.0.2) but all the other clients (running the same version as the server - 3.4.5) throw that error
02:35 tg2 anybody using disperse in production?
02:47 badone joined #gluster
03:33 DV joined #gluster
03:49 shubhendu joined #gluster
03:58 hagarth joined #gluster
04:04 ppai joined #gluster
04:06 RameshN joined #gluster
04:12 itisravi joined #gluster
04:12 spandit joined #gluster
04:15 MacWinner joined #gluster
04:23 kanagaraj joined #gluster
04:33 lalatenduM joined #gluster
04:34 RameshN joined #gluster
04:35 anoopcs joined #gluster
04:36 hagarth joined #gluster
04:38 bala joined #gluster
04:39 meghanam joined #gluster
04:40 kshlm joined #gluster
04:45 leo__ joined #gluster
04:48 B21956 joined #gluster
04:48 jiffin joined #gluster
04:49 anoopcs joined #gluster
04:59 jbrooks joined #gluster
05:10 rafi_kc joined #gluster
05:12 nbalacha joined #gluster
05:14 newdave joined #gluster
05:15 rafi1 joined #gluster
05:16 rafi_kc joined #gluster
05:16 soumya joined #gluster
05:37 coredump|br joined #gluster
05:37 bala joined #gluster
05:37 kdhananjay joined #gluster
05:39 plarsen joined #gluster
05:42 rafi1 joined #gluster
05:47 raghu joined #gluster
05:48 nbalacha joined #gluster
06:02 poornimag joined #gluster
06:10 soumya joined #gluster
06:12 nbalacha joined #gluster
06:13 Gorian joined #gluster
06:15 sac_ joined #gluster
06:18 zerick joined #gluster
06:21 anil joined #gluster
06:27 glusterbot News from newglusterbugs: [Bug 1176008] Directories not visible anymore after add-brick, new brick dirs not part of old bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1176008>
06:28 saurabh joined #gluster
06:41 sahina joined #gluster
06:48 rjoseph joined #gluster
06:49 pcaruana joined #gluster
06:52 alb0t joined #gluster
06:53 alb0t Anyone up this time of night?
06:57 glusterbot News from newglusterbugs: [Bug 1176011] Client sees duplicated files <https://bugzilla.redhat.com/show_bug.cgi?id=1176011>
07:00 TvL2386 joined #gluster
07:05 alb0t Greetings!
07:06 atalur joined #gluster
07:06 rjoseph joined #gluster
07:10 ctria joined #gluster
07:21 aravindavk joined #gluster
07:22 jtux joined #gluster
07:36 Philambdo joined #gluster
07:38 [Enrico] joined #gluster
07:41 lalatenduM joined #gluster
07:49 overclk joined #gluster
07:51 bala joined #gluster
07:54 SOLDIERz joined #gluster
07:57 deniszh joined #gluster
08:02 ricky-ti1 joined #gluster
08:03 atalur joined #gluster
08:06 Gorian joined #gluster
08:13 Philambdo joined #gluster
08:25 Philambdo joined #gluster
08:31 atalur joined #gluster
08:47 Guest17459 joined #gluster
08:55 liquidat joined #gluster
09:02 jaank joined #gluster
09:07 mbukatov joined #gluster
09:08 nwe joined #gluster
09:09 sahina joined #gluster
09:10 nwe hello, I have set up two servers with glusterfs replication, and mounted it on a third.. with mount -t glusterfs disks.example.com:/storage /mount/point/ but now when I create files the disk speed is 58M/sec.. any idea how I can increase the performance? I'm using lacp with bond-mode 4..
09:11 Slashman joined #gluster
09:16 ghenry joined #gluster
09:18 mator nwe, http://rhsummit.files.wordpress.com/2013/06/rao_t_0340_tuning_rhel_for_databases3.pdf
09:19 mator read for "performance monitoring"
09:19 mator vmstat / iostat / sar
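A sketch of the baseline measurements mator is pointing at, plus a crude throughput comparison between a brick and the gluster mount (brick path hypothetical, mount point taken from nwe's command; oflag=direct bypasses the page cache so the numbers are not inflated by caching):

    # watch disks and the bonded NICs on each brick server during a copy
    iostat -x 5
    sar -n DEV 5
    vmstat 5

    # raw write speed on the brick filesystem vs. through the gluster mount
    dd if=/dev/zero of=/export/brick1/ddtest bs=1M count=1024 oflag=direct
    dd if=/dev/zero of=/mount/point/ddtest bs=1M count=1024 oflag=direct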
09:23 an joined #gluster
09:24 aravindavk joined #gluster
09:26 nwe mator: okay, I will take a look at that pdf, thanks!
09:33 Norky joined #gluster
09:38 SOLDIERz joined #gluster
09:47 an_ joined #gluster
10:09 Humble joined #gluster
10:23 ppai joined #gluster
10:25 SOLDIERz joined #gluster
10:25 lyang0 joined #gluster
10:26 DV joined #gluster
10:29 an joined #gluster
10:36 DV joined #gluster
10:47 an_ joined #gluster
10:48 an joined #gluster
10:48 karnan joined #gluster
10:52 LebedevRI joined #gluster
10:54 an joined #gluster
10:58 glusterbot News from newglusterbugs: [Bug 1176062] Force replace-brick lead to the persistent write(use dd) return Input/output error <https://bugzilla.redhat.com/show_bug.cgi?id=1176062>
11:02 an_ joined #gluster
11:09 karnan joined #gluster
11:11 an joined #gluster
11:13 [Enrico] joined #gluster
11:24 kkeithley1 joined #gluster
11:27 DV joined #gluster
11:33 ira joined #gluster
11:36 calum_ joined #gluster
11:38 DV joined #gluster
11:41 an joined #gluster
11:43 Gorian joined #gluster
11:43 calum_ joined #gluster
11:45 calum_ I'm getting what sounds like mains hum on a client's dahdi channels. Is this likely to be an interface card issue, a problem on the line, or a missing earth link between the phone system and the telephone line?
11:46 Humble joined #gluster
11:46 calum_ sorry meant for #asterisk... please ignore
11:47 kovshenin joined #gluster
11:49 atalur joined #gluster
11:49 diegows joined #gluster
12:01 elico joined #gluster
12:11 itisravi joined #gluster
12:11 Arrfab joined #gluster
12:14 an joined #gluster
12:19 edward1 joined #gluster
12:30 atalur joined #gluster
12:31 an joined #gluster
12:32 an_ joined #gluster
12:38 an joined #gluster
12:41 bene joined #gluster
12:43 an_ joined #gluster
12:44 Gorian joined #gluster
12:44 an joined #gluster
12:45 chirino joined #gluster
12:46 edong23_ joined #gluster
12:46 tdasilva joined #gluster
12:47 saltsa joined #gluster
12:47 codex joined #gluster
12:50 smohan joined #gluster
12:50 zerick joined #gluster
12:55 hagarth_ joined #gluster
12:56 coredump joined #gluster
12:57 anoopcs joined #gluster
12:58 jdarcy joined #gluster
12:59 glusterbot News from resolvedglusterbugs: [Bug 1175641] mount.glusterfs fails <https://bugzilla.redhat.com/show_bug.cgi?id=1175641>
13:00 Slashman_ joined #gluster
13:10 leo__ joined #gluster
13:19 partner_ umm any idea why would i get "no space left on device" while there's still plenty available? v3.4.5 and replica 2 volume
13:19 partner_ 4.7T  4.6T  101G  98% /srv-data
13:20 partner_ same numbers on the bricks, seems the issue is on the glusterfs server side as it's "full" there as well.
13:20 partner_ i'm not aware of any superuser reservations on XFS, logs just keep saying it's full but i have no idea why, not out of inodes either
13:31 kovshenin joined #gluster
13:33 partner_ LVM is in between
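A sketch of the local checks for partner_'s situation, using the brick path that appears further down in the log (/export/brick2); the aim is to rule out hidden reservations at the XFS and LVM layers before blaming gluster:

    df -h /export/brick2      # space as the filesystem sees it
    df -i /export/brick2      # inode usage
    xfs_info /export/brick2   # filesystem geometry
    lvs && vgs                # any unallocated extents left in the volume group?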
13:40 rjoseph joined #gluster
13:47 julim joined #gluster
13:49 sac_ joined #gluster
13:53 nbalacha joined #gluster
13:58 B21956 joined #gluster
13:59 fandi joined #gluster
14:05 anti[Enrico] joined #gluster
14:05 anti[Enrico] joined #gluster
14:13 deniszh joined #gluster
14:17 bennyturns joined #gluster
14:19 virusuy joined #gluster
14:22 mator 3.5.3 try wasn't successful
14:22 mator going back to 3.2.7
14:25 meghanam joined #gluster
14:30 deniszh joined #gluster
14:31 sputnik13 joined #gluster
14:31 plarsen joined #gluster
14:32 lpabon joined #gluster
14:34 plarsen joined #gluster
14:39 atalur joined #gluster
14:39 deniszh joined #gluster
14:45 n-st joined #gluster
14:45 DV joined #gluster
14:47 deniszh joined #gluster
14:48 hagarth_ joined #gluster
14:53 Gorian joined #gluster
14:55 harish joined #gluster
15:00 coredump joined #gluster
15:03 lalatenduM_ joined #gluster
15:09 l0uis partner_: you're 98% full, it's possible that even though the volume has 2% free space, some of your bricks don't. (I think this can cause your problem, I'm no expert tho)
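A sketch of the gluster-side checks for the same question (volume name hypothetical): per-brick free space and inodes as gluster reports them, plus the cluster.min-free-disk option, which makes DHT treat nearly-full bricks as full when placing new files. Whether that can surface as ENOSPC on a plain replica volume is worth testing rather than assuming.

    gluster volume status myvol detail   # Disk Space Free and Inode Count per brick
    gluster volume info myvol            # is cluster.min-free-disk set?
    gluster volume set myvol cluster.min-free-disk 5%   # example: shrink the reserve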
15:13 georgeh joined #gluster
15:13 wushudoin joined #gluster
15:16 _Bryan_ joined #gluster
15:17 msciciel_ joined #gluster
15:18 DV joined #gluster
15:30 soumya joined #gluster
15:33 deniszh joined #gluster
15:54 an joined #gluster
15:59 vimal joined #gluster
16:01 deniszh joined #gluster
16:08 kmai007 joined #gluster
16:08 kmai007 can someone point me to details of how the native fuse client works, any reading materials would be great
16:09 kmai007 i know at a high level it writes to all storage nodes,
16:09 kmai007 but how does it perform reads
16:11 fandi kmai007: just search on search engine
16:12 fandi kmai007: i think this is enough http://www.gluster.org/
16:13 kmai007 fandi: i searched, i just get high-level talk about the FUSE native client
16:14 kkeithley1 native client is a glusterfs daemon with an xlator stack. At the bottom of the stack is a protocol/client xlator. At the top is a "secret" fuse bridge xlator. The fuse bridge xlator connects to fuse, and marshals all the I/O requests that come through the kernel from the apps.
16:14 kkeithley1 It looks very similar to the gnfs server. The main difference is that the gnfs server has a NFSv3 server xlator at the top of the xlator stack.
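The assembled client graph can be read from the volfile glusterd generates, assuming the default state directory (volume name hypothetical); the fuse bridge itself is the one piece that will not appear there, matching the "secret" xlator kkeithley mentions:

    cat /var/lib/glusterd/vols/myvol/myvol-fuse.vol
    # protocol/client xlators sit at the bottom of the graph, with
    # cluster/replicate and the performance xlators stacked above them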
16:15 kmai007 so kkeithley in a read, would the fuse client ask all storage nodes, or just the 1st responder ?  I ran a tcpdump on a 'stat of a file in a volume' and I saw the request go out to all storage nodes
16:15 kmai007 just building a deep level of understanding
16:16 kmai007 maybe 'stat' is the wrong cmd to watch for, since it may invoke a "heal"
16:18 kkeithley_ What you're seeing is the lookup, which gets sent to all the servers for a volume. The rest of the read() is dispatched to the first server that responds
16:18 kmai007 thanks kke_
16:19 kmai007 kkeithley: do you know of any methods that I could "prove" or identify who the 1st responder is?
16:20 jbrooks joined #gluster
16:21 jbrooks joined #gluster
16:21 kkeithley_ apart from a tcpdump, no
16:21 kmai007 thanks kkeithley
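For anyone repeating kmai007's experiment, a capture filter along these lines shows which brick answers a LOOKUP first (interface name hypothetical; glusterd listens on 24007 and bricks on 3.4+ normally take ports from 49152 upward):

    tcpdump -i eth0 -tt -nn 'port 24007 or portrange 49152-49200'

There is also a cluster.read-subvolume volume option for pinning AFR reads to a particular replica, if the goal is to control rather than just observe the choice.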
16:23 _pol joined #gluster
16:42 lmickh joined #gluster
16:53 lyang0 joined #gluster
16:55 vimal joined #gluster
16:57 T3 joined #gluster
16:58 partner_ l0uis: it's full locally on the gluster server, just don't know why. something reports something incorrectly if we talk about either space or inodes, no clear errors in any single log i've grepped through so far :/
16:59 l0uis partner_: so you've checked the actual bricks themselves and they aren't anywhere near 98% full ?
17:00 T3 guys, I have a 2-node replication setup, and I'm now populating it with data. I'm running rsync from an external server to node1 for a couple of days now, and it still has 1 or 2 days more to go (we can blame the network, no problem here). My question is: I'm getting a huge gluster.log on node1 - 300MB yesterday, 1GB today. gluster.log on node2 is like 3MB.
17:01 T3 Most of the entries on the big log are like this: [2014-12-19 16:56:48.426207] W [client-rpc-fops.c:1226:client3_3_removexattr_cbk] 0-site-images-client-1: remote operation failed: No data available
17:01 T3 the synchronization is going well, also.
17:02 T3 I'm just worried if this is something temporary due to the copy I'm doing, or if I can ignore them.
17:02 kmai007 T3 what version of gluster
17:02 partner_ l0uis: no, they are both exactly 98% full
17:02 T3 glusterfs 3.5.3 built on Nov 18 2014 03:53:25
17:02 partner_ everything shows same numbers
17:03 kmai007 you're rsyncing through a client mounted volume?
17:03 kmai007 i'm using 3.5.3 too and I've been having all kinds of noisy log files,
17:03 partner_ /dev/mapper/vg1-brick2                                  4.7T  4.6T  101G  98% /export/brick2
17:03 partner_ /dev/mapper/vg1-brick2                                 232469648 20700864 211768784    9% /export/brick2
17:03 kmai007 I was told to do 2 things
17:03 partner_ there is room, where does it go :o
17:03 T3 kmai007, actually my "gluster client" is on the same server as node1
17:03 kmai007 change the logging level with the gluster volume set cmd on the storage
17:04 T3 kmai007, I'm using INFO
17:04 kmai007 and the client to mount it with --logging-level=WARNING
17:04 T3 what are you using?
17:04 kmai007 or something of that nature
17:04 kmai007 i'm  still using INFO
17:04 T3 WARNING means more logging than INFO, right?
17:04 kmai007 i think it means only log levels WARNING and UP
17:05 kmai007 INFO is the chattiest, next to debugging
17:05 T3 ohh
17:05 kmai007 i haven't done it yet
17:05 kmai007 ndevos told me to try that
17:05 kmai007 i actually asked that yesterday
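For reference, the knobs ndevos most likely meant, both present in 3.5; the volume name is inferred from T3's log line (site-images) and the mount point is hypothetical:

    # server side: quieten the brick logs
    gluster volume set site-images diagnostics.brick-log-level WARNING
    # client side: quieten the fuse client log
    gluster volume set site-images diagnostics.client-log-level WARNING
    # or per mount:
    mount -t glusterfs -o log-level=WARNING node1:/site-images /mnt/site-images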
17:05 l0uis partner_: i'm confused, if the underlying brick is reporting 98%, why do you think gluster is lying?
17:05 newdave joined #gluster
17:06 stus joined #gluster
17:06 stus hello
17:06 glusterbot stus: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:06 stus thanks for supporting so many distributions!
17:06 stus I have a question about Ubuntu packages
17:07 T3 I should definitely try that, kmai007
17:07 T3 thanks for the tip
17:07 stus what's the difference between the gluster and semiosis PPAs in launchpad?
17:07 stus one only lists glusterfs releases up to 3.5, the other up to 3.6
17:08 l0uis stus: semiosis builds the ubuntu packages and puts them in the ppa, which are blessed (i guess?) by gluster
17:08 stus why are there two and what's the purpose of each?
17:08 soumya joined #gluster
17:08 l0uis The ppa is separated by major version so no one gets any upgrade surprises.
17:09 stus l0uis thanks, although semiosis does not have a 3.6
17:09 l0uis He just hasn't gotten around to building it yet, presumably because it's so new.
17:10 stus I see
17:11 stus so the one at https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.6, this hasn't been blessed yet ?
17:11 stus there's also a -qa
17:11 DV joined #gluster
17:11 _pol joined #gluster
17:12 stus I would like to use the latest release, 3.6 if possible, but not sure if I should rather use the one by ~semiosis
17:13 l0uis if you're on ubuntu i would use semiosis.
17:13 l0uis ask him and i'm sure he'll build 3.6
17:15 stus so I guess the builds by semiosis are more ubuntu-friendly, in order to make the transition to universe smooth in the future? Otherwise I don't understand why he would build it if there's already ppa:gluster/glusterfs-3.6 :)
17:15 stus thanks l0uis
17:18 l0uis stus: good question, I dont know. Maybe they changed policy w/ 3.6?
17:18 semiosis stus: the ~semiosis PPAs are old, I don't update them anymore.  new releases go into the ~gluster PPAs
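For anyone following along, switching an Ubuntu box to the ~gluster PPA stus linked looks roughly like this:

    sudo add-apt-repository ppa:gluster/glusterfs-3.6
    sudo apt-get update
    sudo apt-get install glusterfs-server glusterfs-client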
17:18 stus semiosis: thanks!
17:18 semiosis yw
17:18 l0uis the download link on gluster.org still points to your ppa
17:18 stus I was a bit confused because they seem to get uploaded by the same user
17:18 semiosis ehhh
17:19 l0uis http://www.gluster.org/download/
17:19 stus yeah, plus that : )
17:20 l0uis semiosis: this is why I was confused last night when I didn't see 3.5.3. You mentioned there was 3.5.3 in trusty, but I was looking at your ppa! :)
17:20 l0uis guess I need to switch over ...
17:21 stus thanks y'all, it's clear now
17:21 stus and thanks for the builds, semiosis, much appreciated :)
17:24 daMaestro joined #gluster
17:27 semiosis yw
17:28 semiosis someone emailed me this morning about how to automate launchpad builds
17:28 semiosis i need to work on that
17:40 jobewan joined #gluster
17:53 lmickh joined #gluster
17:53 claudioll joined #gluster
17:55 lalatenduM joined #gluster
18:09 jaank joined #gluster
18:29 ricky-ticky1 joined #gluster
18:54 ekuric joined #gluster
19:04 msciciel1 joined #gluster
19:06 bene2 joined #gluster
19:07 marcoceppi_ joined #gluster
19:07 siel_ joined #gluster
19:08 n-st_ joined #gluster
19:08 eclectic joined #gluster
19:08 DJCl34n joined #gluster
19:08 bet_ joined #gluster
19:08 georgeh_ joined #gluster
19:08 uebera|| joined #gluster
19:08 uebera|| joined #gluster
19:08 DJClean joined #gluster
19:09 chirino joined #gluster
19:09 verdurin joined #gluster
19:11 kalzz joined #gluster
19:21 shaunm joined #gluster
19:21 deniszh joined #gluster
19:33 sputnik13 joined #gluster
19:34 lpabon joined #gluster
19:45 tdasilva joined #gluster
20:07 lmickh joined #gluster
20:56 pstallworth joined #gluster
21:01 Philambdo joined #gluster
21:05 pstallworth can someone help me interpret volume profile results or point me to a reference of normal results i could compare to?
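No one answered in this log; for reference, the profile data pstallworth asks about is gathered as sketched below (volume name hypothetical), and the useful reading is usually the per-brick fop mix and latency figures relative to each other rather than against any published "normal" values:

    gluster volume profile myvol start
    # run the workload for a while, then:
    gluster volume profile myvol info
    gluster volume profile myvol stop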
21:06 an joined #gluster
21:06 an joined #gluster
22:04 sputnik13 joined #gluster
22:20 an joined #gluster
22:20 plarsen joined #gluster
22:21 Intensity joined #gluster
22:34 rotbeard joined #gluster
22:41 XpineX joined #gluster
22:42 feeshon joined #gluster
22:47 T3 joined #gluster
22:48 XpineX joined #gluster
22:55 DV joined #gluster
23:03 calum_ joined #gluster
23:09 badone joined #gluster
23:26 XpineX joined #gluster
23:26 badone joined #gluster
23:46 ninkotech joined #gluster
23:46 ninkotech_ joined #gluster
23:52 XpineX joined #gluster
