
IRC log for #gluster, 2013-08-07


All times shown according to UTC.

Time Nick Message
00:13 wushudoin joined #gluster
00:38 _pol joined #gluster
00:48 yinyin joined #gluster
00:48 yosafbridge joined #gluster
00:50 bala joined #gluster
00:55 lpabon joined #gluster
00:56 vpshastry joined #gluster
01:09 satheesh joined #gluster
01:11 awheeler joined #gluster
01:15 bulde joined #gluster
01:21 jclift_ joined #gluster
01:24 chirino joined #gluster
01:28 kevein joined #gluster
01:32 lpabon joined #gluster
01:36 vpshastry left #gluster
01:40 bulde joined #gluster
01:42 satheesh joined #gluster
01:45 purpleidea joined #gluster
01:45 purpleidea joined #gluster
01:45 harish joined #gluster
01:50 harish joined #gluster
01:53 stopbit joined #gluster
01:54 chirino joined #gluster
01:58 stormbringer joined #gluster
02:01 lalatenduM joined #gluster
02:02 nightwalk joined #gluster
02:05 stigchristian joined #gluster
02:19 Kyreeth left #gluster
02:22 bharata joined #gluster
02:38 bulde joined #gluster
02:40 asias joined #gluster
02:48 badone_ joined #gluster
03:03 vshankar joined #gluster
03:14 bulde joined #gluster
03:15 harish joined #gluster
03:17 kshlm joined #gluster
03:25 shubhendu joined #gluster
03:37 vpshastry joined #gluster
03:38 karthik joined #gluster
03:38 shylesh joined #gluster
03:46 harish joined #gluster
03:47 itisravi joined #gluster
03:55 sgowda joined #gluster
04:02 dusmant joined #gluster
04:05 harish joined #gluster
04:08 awheeler joined #gluster
04:12 ppai joined #gluster
04:14 CheRi joined #gluster
04:28 lalatenduM joined #gluster
04:30 bala joined #gluster
04:31 kanagaraj joined #gluster
04:35 Chat9785 joined #gluster
04:37 rjoseph joined #gluster
04:47 ndarshan joined #gluster
04:50 nightwalk joined #gluster
04:51 meghanam joined #gluster
04:53 lalatenduM joined #gluster
04:54 aravindavk joined #gluster
04:57 harish joined #gluster
05:01 deepakcs joined #gluster
05:05 vpshastry joined #gluster
05:07 vijaykumar joined #gluster
05:15 RameshN joined #gluster
05:15 RameshN_ joined #gluster
05:15 RameshN__ joined #gluster
05:19 bulde joined #gluster
05:19 ababu joined #gluster
05:21 bulde joined #gluster
05:24 shruti joined #gluster
05:26 sahina joined #gluster
05:30 _pol joined #gluster
05:35 chirino joined #gluster
05:36 ndarshan joined #gluster
05:39 vpshastry left #gluster
05:41 hagarth joined #gluster
05:47 raghu joined #gluster
05:49 rastar joined #gluster
06:01 rgustafs joined #gluster
06:08 vimal joined #gluster
06:15 ababu joined #gluster
06:21 ppai joined #gluster
06:21 sgowda joined #gluster
06:25 saurabh joined #gluster
06:26 psharma joined #gluster
06:35 mooperd joined #gluster
06:40 kshlm joined #gluster
06:49 andreask joined #gluster
06:54 ekuric joined #gluster
06:55 _ndevos joined #gluster
06:56 _pol joined #gluster
06:58 zombiejebus joined #gluster
07:03 hybrid512 joined #gluster
07:03 guigui3 joined #gluster
07:04 ngoswami joined #gluster
07:06 thomasle_ joined #gluster
07:09 ndarshan joined #gluster
07:12 eseyman joined #gluster
07:13 sgowda joined #gluster
07:19 vimal joined #gluster
07:25 harish joined #gluster
07:34 ujjain joined #gluster
07:35 eseyman joined #gluster
07:37 tjikkun_work joined #gluster
07:41 ntt_ joined #gluster
07:41 ntt_ hi
07:41 glusterbot ntt_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:44 ntt_ When i try to mount glusterfs (with -t glusterfs) it fails. My log file is -> http://pastebin.com/LMv3ff44 . Btw, the error is "[common-utils.c:211:gf_resolve_ip6] 0-resolver: getaddrinfo failed (Name or service not known)". Can someone help me?
07:44 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
07:45 piotrektt joined #gluster
07:45 piotrektt joined #gluster
07:54 ngoswami joined #gluster
07:55 recidive joined #gluster
07:56 glusterbot New news from resolvedglusterbugs: [Bug 962619] glusterd crashes on volume-stop <http://goo.gl/XXzSY>
07:57 ngoswami joined #gluster
07:59 ngoswami joined #gluster
08:05 tziOm joined #gluster
08:08 Norky joined #gluster
08:08 neuroticimbecile joined #gluster
08:11 hagarth @channelstats
08:11 glusterbot hagarth: On #gluster there have been 165621 messages, containing 7027359 characters, 1173889 words, 4712 smileys, and 625 frowns; 1036 of those messages were ACTIONs. There have been 63325 joins, 1985 parts, 61335 quits, 20 kicks, 163 mode changes, and 7 topic changes. There are currently 216 users and the channel has peaked at 217 users.
08:13 neuroticimbecile hi all
08:15 samppah hey hey
08:15 hagarth ntt_: is there a DNS problem which is preventing resolution of a server IP/hostname from your client?
08:15 hagarth @hi
08:16 ntt_ hagarth: you're right. I'm testing in a local network, so when i mount with -t glusterfs i have to update /etc/hosts with the server's hostname
08:16 ntt_ solved. Thanks
08:16 hagarth ntt_: good to know!
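
A minimal sketch of the fix discussed above, assuming a hypothetical server named gluster1 at 192.168.1.10 exporting a volume gv0 (all names and addresses illustrative):

    # on the client: let getaddrinfo resolve the brick server's hostname
    echo "192.168.1.10  gluster1" >> /etc/hosts
    # then the native mount works
    mkdir -p /mnt/gv0
    mount -t glusterfs gluster1:/gv0 /mnt/gv0
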
08:17 ntt_ How can i test glusterfs performance? What are most used tools in a linux environment?
08:17 hagarth ntt_: depends on the workload that you plan to use with glusterfs.
08:17 hagarth neuroticimbecile: hello
08:18 neuroticimbecile i'm about to set up a gluster cluster, can't decide if i need each disk to be a brick (several bricks per node), or to RAID them together so that each node has only 1 brick.
08:19 ntt_ i would like to measure performance with a large number of files and with a big file (>4GB) from one client. Is there a way to simulate this?
08:19 hagarth neuroticimbecile: most deployments make use of raided disks with fewer bricks per node
08:20 hagarth ntt_: you can use dd for creating files with a reasonable block size, say 64k or 128k, to determine performance.
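
A hedged example of the kind of dd run hagarth suggests, against a hypothetical mount point /mnt/gv0 (block size and count are illustrative):

    # sequential write: 8192 x 128k = 1 GiB; conv=fdatasync flushes before dd reports a rate
    dd if=/dev/zero of=/mnt/gv0/ddtest bs=128k count=8192 conv=fdatasync
    # sequential read of the same file back
    dd if=/mnt/gv0/ddtest of=/dev/null bs=128k
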
08:20 neuroticimbecile hagarth: wouldn't that increase the brick size, and also increase the duration of rebuilds/heal ?
08:22 neuroticimbecile joined #gluster
08:22 hagarth neuroticimbecile: it does, but the number of glusterfsd processes would be lower in such a scheme. Besides, it would be good to protect yourself from single disk failures by having raid in place.
08:24 jmeeuwen joined #gluster
08:27 bharata-rao joined #gluster
08:27 guigui3 joined #gluster
08:31 neuroticimbecile thanks hagarth
08:57 nightwalk_ joined #gluster
09:00 NuxRo neuroticimbecile: +1 for raided setup. also when a disk fails you just replace it and raid card takes care of it, no need to mess around with replacing bricks in gluster
09:01 guigui3 joined #gluster
09:03 ndarshan joined #gluster
09:05 neuroticimbecile thanks NuxRo, yes that's also what i've been hearing from the gluster intro videos
09:08 neuroticimbecile was just trying to ask around if there are other (advantageous) reasons not to go that route
09:09 zetheroo1 joined #gluster
09:38 guigui1 joined #gluster
09:46 neuroticimbecile may i ask you guys (who have gluster clusters), how many bricks do you have per node?
10:01 ndarshan joined #gluster
10:17 bulde joined #gluster
10:20 edward1 joined #gluster
10:20 shubhendu joined #gluster
10:37 dusmant joined #gluster
10:39 manik joined #gluster
10:45 ntt_ hagarth: I have a replica=2. When i try to write with dd, speed is X MB/s. If i stop service glusterd on a node, I obtain 2X MB/s. Is this normal?
10:46 spider_fingers joined #gluster
10:48 _ndevos ntt_: yes, a glusterfs client for a replica 2 volume writes to both bricks at the same time; if the bandwidth is the bottleneck, you see this 1/2 performance behaviour
10:49 ndevos joined #gluster
10:51 andreask joined #gluster
10:55 jclift joined #gluster
10:56 vimal joined #gluster
10:57 ababu joined #gluster
11:01 ntt_ _ndevos: ok. I forgot the bandwidth !
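
A back-of-the-envelope illustration of _ndevos' point, assuming a hypothetical client with a single 1 Gbit/s NIC: the FUSE client sends every write to both replicas itself, so each application byte crosses the client's link twice and the usable rate is roughly 125 MB/s / 2 ≈ 62 MB/s. With one brick stopped, the full ≈ 125 MB/s is available again, which matches the 2X that ntt_ observed.
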
11:10 vimal joined #gluster
11:20 CheRi joined #gluster
11:23 ndarshan joined #gluster
11:23 hagarth joined #gluster
11:24 ppai joined #gluster
11:37 shruti_ joined #gluster
11:37 rcheleguini joined #gluster
11:44 meghanam joined #gluster
11:44 meghanam_ joined #gluster
11:52 bulde1 joined #gluster
11:57 guigui1 joined #gluster
12:11 itisravi_ joined #gluster
12:11 dusmant joined #gluster
12:23 sprachgenerator joined #gluster
12:25 vimal joined #gluster
12:25 harish joined #gluster
12:28 CheRi joined #gluster
12:29 bulde joined #gluster
12:29 glusterbot New news from resolvedglusterbugs: [Bug 976750] Disabling NFS causes E level errors in nfs.log. <http://goo.gl/iorWMy>
12:29 hagarth joined #gluster
12:33 manik joined #gluster
12:38 vpshastry1 joined #gluster
12:39 harish joined #gluster
12:39 aliguori joined #gluster
12:42 eseyman joined #gluster
12:48 awheeler joined #gluster
12:52 bennyturns joined #gluster
12:53 awheeler joined #gluster
12:59 bulde1 joined #gluster
12:59 glusterbot New news from resolvedglusterbugs: [Bug 819130] Merge in the Fedora spec changes to build one single unified spec <http://goo.gl/GfSUw>
13:10 Guest77900 joined #gluster
13:15 andreask joined #gluster
13:20 harish_ joined #gluster
13:22 Bluefoxicy joined #gluster
13:24 hagarth joined #gluster
13:25 vpshastry1 left #gluster
13:32 Guest77900 joined #gluster
13:40 sprachgenerator joined #gluster
13:43 bugs_ joined #gluster
13:43 sprachgenerator joined #gluster
13:51 ninkotech_ joined #gluster
13:51 jebba joined #gluster
13:51 bulde joined #gluster
13:52 kaptk2 joined #gluster
13:55 awheeler joined #gluster
13:56 harish_ joined #gluster
13:57 sjoeboo joined #gluster
13:58 ujjain joined #gluster
14:01 mohankumar joined #gluster
14:01 glusterbot New news from resolvedglusterbugs: [Bug 810079] Handle failures of open fd migration from old graph to new graph <http://goo.gl/Fl7fh> || [Bug 767276] object-storage: using https for swift-plugin <http://goo.gl/azgheQ> || [Bug 768906] object-storage: swift-plugin rpms should copy configuration files under /etc/swift <http://goo.gl/C3RWd> || [Bug 768941] object-storage:parameter updation inside /etc/swift/swift.conf <http://goo.gl/vf9od> || [
14:04 neuroticimbecile left #gluster
14:04 jruggiero joined #gluster
14:05 jruggiero left #gluster
14:09 shylesh joined #gluster
14:12 shubhendu joined #gluster
14:17 jruggiero joined #gluster
14:17 jruggiero left #gluster
14:20 harish_ joined #gluster
14:20 manik joined #gluster
14:22 spider_fingers left #gluster
14:22 wushudoin joined #gluster
14:25 dscastro joined #gluster
14:26 5EXAAD5L7 joined #gluster
14:26 18WAEBZ74 joined #gluster
14:30 tqrst joined #gluster
14:31 awheele__ joined #gluster
14:31 glusterbot New news from resolvedglusterbugs: [Bug 950083] Merge in the Fedora spec changes to build one single unified spec <http://goo.gl/tajoiQ>
14:42 hagarth joined #gluster
14:44 awheeler joined #gluster
14:51 zaitcev joined #gluster
14:51 sac_ avati, ping
14:53 xymox joined #gluster
15:02 vpshastry joined #gluster
15:02 vpshastry left #gluster
15:03 Guest77900 joined #gluster
15:06 dewey joined #gluster
15:06 jack joined #gluster
15:12 eseyman joined #gluster
15:12 awheeler joined #gluster
15:18 harish joined #gluster
15:20 bala joined #gluster
15:20 awheele__ joined #gluster
15:26 plarsen joined #gluster
15:27 zetheroo1 left #gluster
15:29 awheeler joined #gluster
15:30 B21956 joined #gluster
15:30 awheeler joined #gluster
15:32 _pol joined #gluster
15:33 tjstansell1 left #gluster
15:35 B21956 left #gluster
15:36 vshankar joined #gluster
15:43 Norky joined #gluster
15:44 sprachgenerator joined #gluster
15:46 ngoswami joined #gluster
15:50 _pol joined #gluster
15:50 mohankumar joined #gluster
15:52 saurabh joined #gluster
15:52 awheele__ joined #gluster
16:01 Norky joined #gluster
16:05 rotbeard joined #gluster
16:07 neofob joined #gluster
16:10 bulde joined #gluster
16:14 elfar joined #gluster
16:18 tobias- joined #gluster
16:18 tobias- quick question; after gluster volume delete <vol> i can't create a new one with the same name; am I missing something? google doesnt give me much..
16:20 vincent_vdk joined #gluster
16:20 tobias- answer to myself: It is prohibited to prevent data loss. You can delete the export dirs to be able to recreate them
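
For reference, a hedged sketch of the usual brick cleanup when reusing an export directory for a new volume (the brick path /data/brick1 is hypothetical, and this discards the old volume's data and metadata, so only run it on bricks you really mean to recycle):

    # remove the markers that make glusterd refuse to reuse the path
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs
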
16:33 awheeler joined #gluster
16:40 dusmant joined #gluster
16:44 Guest77900 joined #gluster
16:44 Mo_ joined #gluster
16:45 lpabon joined #gluster
16:52 wushudoin left #gluster
16:55 awheele__ joined #gluster
17:08 _pol joined #gluster
17:08 vpshastry joined #gluster
17:09 vpshastry left #gluster
17:09 _pol_ joined #gluster
17:10 plarsen joined #gluster
17:10 awheeler joined #gluster
17:12 thomaslee joined #gluster
17:15 NcA^ joined #gluster
17:15 dewey_ joined #gluster
17:16 Technicool joined #gluster
17:19 theron joined #gluster
17:30 awheele__ joined #gluster
17:35 eladiomendez joined #gluster
17:35 tjstansell1 joined #gluster
17:39 tjstansell semiosis and JoeJulian: regarding yesterday's issues rebuilding a node in a replica cluster, i added a small loop between my initial peer probe and calling gluster restart, to wait for the peer status to change to "Peer in Cluster"
17:40 tjstansell the first time i tried a kickstart with this in place, it immediately showed up in that state after peer probe... and the rest of the script worked fine ... it rejoined the cluster, started the bricks, healed, etc.
17:41 tjstansell the second time, it didn't change to Peer in Cluster (i wasn't logging what the actual state was, so not sure) for 5 seconds (i checked every second) ... it then proceeded and managed to join the cluster just fine.
17:41 dhsmith joined #gluster
17:42 tjstansell interestingly, in that second case, after the gluster restart, it took at least a second for it to see volume info ... whereas the first time it saw volume info immediately.
17:43 tjstansell so i'm starting to think that there are just some intrinsic delays in the negotiations/transfer of volume info that, especially when scripted, can cause issues.
17:44 tjstansell semiosis: this goes back to your comment that the procedure for rebuilding a node with the same hostname only works ~90% of the time.
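
A rough sketch of the kind of wait loop tjstansell describes, not his actual kickstart snippet (peer name and timeout are illustrative):

    gluster peer probe storage2                 # initial probe of the surviving peer
    for i in $(seq 1 30); do                    # wait up to 30s for state exchange
        gluster peer status | grep -q 'Peer in Cluster' && break
        sleep 1
    done
    service glusterd restart                    # then restart glusterd and let volumes sync
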
17:44 Twinkies joined #gluster
17:50 manik joined #gluster
17:53 lalatenduM joined #gluster
17:54 edward1 joined #gluster
18:12 _pol_ joined #gluster
18:21 jthorne joined #gluster
18:26 awheeler joined #gluster
18:34 glusterbot New news from resolvedglusterbugs: [Bug 923556] 75% O_DIRECT sequential write performance regression <http://goo.gl/L1OYYe>
18:41 Humble joined #gluster
19:04 awheele__ joined #gluster
19:13 _pol joined #gluster
19:29 frakt joined #gluster
19:40 daMaestro joined #gluster
19:43 Twinkies joined #gluster
19:44 Twinkies -
19:44 semiosis +
19:45 Twinkies *
20:19 awheeler joined #gluster
20:21 dberry joined #gluster
20:36 awheele__ joined #gluster
20:45 JoeJulian semiosis: https://twitter.com/JoeCyberGuru/status/365211542314172416
20:45 glusterbot <http://goo.gl/t6bBAp> (at twitter.com)
20:45 * JoeJulian gets tired of linear thinking.
20:55 semiosis JoeJulian: the solution is simple, background self heal count to 2, heal alg to full
20:55 semiosis otherwise yeah all that replication will saturate your links & maybe cpu too
20:55 semiosis ec2 is not forgiving in that way
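
The two knobs semiosis mentions correspond to volume options along these lines (the volume name myvol is hypothetical):

    gluster volume set myvol cluster.background-self-heal-count 2
    gluster volume set myvol cluster.data-self-heal-algorithm full
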
20:56 JoeJulian I know heal is at a lower priority than client access, so it shouldn't even matter.
20:56 semiosis heh
20:56 JoeJulian (in 3.3+)
20:57 JoeJulian I'm just tired of the "I can't do it this way, so it's the tool's fault!" mentality.
20:57 semiosis hahahahaa
20:57 _pol_ joined #gluster
20:57 semiosis my standard response to that is, "well it works great for plenty of people"
20:57 JoeJulian yep
20:57 chirino joined #gluster
20:58 semiosis i need ,,(undocumented options)
20:58 glusterbot Undocumented options for 3.4: http://goo.gl/Lkekw
20:59 JoeJulian i need ,,(Whiskey & Coke)
20:59 glusterbot JoeJulian: Error: No factoid matches that key.
21:01 dhsmith_ joined #gluster
21:04 JoeJulian Actually, the one I want to look at is the "we want N replicas because we have N web servers and ..." I'm not really sure where most of them are going with that one.
21:08 andreask joined #gluster
21:08 andreask joined #gluster
21:14 semiosis JoeJulian: scalability!  get with it!
21:23 jporterfield joined #gluster
21:25 dhsmith joined #gluster
21:31 recidive joined #gluster
21:35 glusterbot New news from resolvedglusterbugs: [Bug 844761] Samba hook script adds redundant sections in smb.conf <http://goo.gl/tl292>
21:58 JoeJulian semiosis: Ok, never been much of a ubuntu fan, but this actually has me wanting to invest in them: http://igg.me/at/ubuntuedge
21:58 glusterbot Title: Ubuntu Edge | Indiegogo (at igg.me)
21:58 semiosis +1
22:06 _pol joined #gluster
22:07 _pol joined #gluster
22:11 _pol joined #gluster
22:27 tziOm joined #gluster
22:36 RiW joined #gluster
22:36 RiW left #gluster
23:20 joshit_ joined #gluster
23:21 joshit_ wondering if anyone can help me with gluster
23:23 joshit_ we are having the gluster native client freeze when one server node reboots, centos 6.4
23:24 JoeJulian Your network is stopping before your bricks.
23:24 joshit_ so ls /dir not available till server fully boots again
23:25 JoeJulian or 42 seconds pass. Whichever comes first.
23:25 joshit_ how do we fix this
23:25 joshit_ i have done many fresh centos 6.4 tests with fail everytime
23:25 JoeJulian Did you install from the ,,(yum repo)?
23:25 glusterbot The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
23:25 joshit_ yes gluster epel repo
23:25 MugginsM joined #gluster
23:26 joshit_ straight from gluster
23:26 joshit_ i have even tried from git, but all fail same problem
23:27 JoeJulian ls /etc/rc.d/rc6.d | egrep '(gluster|network)'
23:29 joshit_ K80glusterd & K90network
23:29 JoeJulian Ah, there's your problem. K80glusterfsd is missing...
23:29 awheeler joined #gluster
23:30 joshit_ the newest rpms, 3.4-08, removed the glusterfsd script
23:30 joshit_ and we still had the issue prior to the update
23:31 JoeJulian argh...
23:31 jebba joined #gluster
23:32 JoeJulian kkeithley: ^
23:32 JoeJulian ... though he's on east coast time so he's probably not around...
23:32 joshit_ JoeJulian do you have a centos 6.4 setup?
23:32 joshit_ working setup?
23:33 JoeJulian I'm still on 3.3
23:33 joshit_ i have even built 3.3.2 from git on fresh centos 6.4 install and mnt freezes on reboot
23:35 joshit_ maybe you can share some light on your /etc/rc.d/rc6.d
23:36 JoeJulian yep, he took it out of 3.3.2 too. Argh.
23:36 JoeJulian me: file a bug
23:36 glusterbot http://goo.gl/UUuCq
23:36 joshit_ well, the glusterfsd script is in 3.3, it was also in the original 3.4, and now 3.4-08 comes along and the glusterfsd scripts are gone
23:36 joshit_ but all versions of 3.4 gives us freeze on boot
23:37 JoeJulian Having the glusterfsd init script is necessary because it's used at runlevel 0 and 6 to kill the bricks.
23:37 joshit_ even 3.3 on centos 6.4 freezes on reboot
23:37 joshit_ i mean the mount
23:38 JoeJulian put this in /etc/rc.d/init.d/glusterfsd then "chkconfig --add glusterfsd" http://ur1.ca/ey0y3
23:38 glusterbot Title: #30715 Fedora Project Pastebin (at ur1.ca)
23:39 JoeJulian 3.3.1 had that init script (which is where I just pasted it from)
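
For context, a heavily simplified sketch of what that init script does; this is an illustration, not the 3.3.1 script JoeJulian pasted. Its only real job is the stop action, which kills the brick processes at runlevel 0/6 so their TCP connections close before the network goes down:

    #!/bin/sh
    # glusterfsd   kill brick processes cleanly at shutdown/reboot
    # chkconfig: 35 20 80
    # description: stops glusterfsd bricks before the network is taken down
    # (stop priority 80 is what creates the K80glusterfsd link in rc0.d/rc6.d)
    case "$1" in
        stop)
            killall glusterfsd 2>/dev/null || true
            ;;
        start|status)
            # bricks are started by glusterd, nothing to do here
            ;;
    esac
    exit 0
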
23:41 joshit_ thanks will test
23:41 joshit_ I know you from your blog and reading your glusterfs articles very informative :)
23:42 JoeJulian Thanks. I try. :)
23:50 joshit_ added the script
23:50 joshit_ chkconfig the service
23:50 joshit_ remounted mount -t glusterfs blah blah
23:51 joshit_ reboot on server
23:51 joshit_ did ls /mnt/dir
23:51 joshit_ freezes
23:51 joshit_ everytime
23:51 joshit_ however if we manually stop glusterfsd, then reboot, then ls /mnt/dir, it's ok
23:51 joshit_ however the automatic way freezes
23:52 JoeJulian Let me throw together some test VMs and see what's happening.
23:53 joshit_ even with chkconfig override without success
23:54 joshit_ which os do you have gluster running on?
23:54 JoeJulian CentOS 6.4
23:55 joshit_ what other services on shutdown could affect the brick freezing?
23:55 joshit_ this is complete fresh install
23:56 joshit_ ive tested on fresh arch and works perfectly as it should
23:56 joshit_ mageia also works
23:57 JoeJulian It's all about the TCP connection not getting closed. Without that the client doesn't know the server is gone until it satisfies a ping-timeout.
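
The timeout JoeJulian refers to is the network.ping-timeout volume option, which defaults to 42 seconds. It can be lowered, though very small values make brief network blips expensive; a hedged example with a hypothetical volume name:

    gluster volume set myvol network.ping-timeout 20
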
