
IRC log for #gluster, 2014-01-06


All times shown according to UTC.

Time Nick Message
00:23 jporterfield joined #gluster
00:51 jporterfield joined #gluster
00:57 jporterfield joined #gluster
01:13 plarsen joined #gluster
01:25 recidive joined #gluster
01:34 hjmangalam1 yesterday I posted a long, badly formatted set of symptoms about what happens when 2/4 gluster servers go down and the fs continues to be active.  The short solution is to unmount/remount the glusterfs from all the clients.  that seems to have addressed almost all of the weirdnesses I mentioned.  see recent gluster list postings for the very long set of symptoms.
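(For reference, the unmount/remount cycle hjmangalam1 describes is the standard client-side remount; a minimal sketch, assuming a fuse mount of a volume gv0 from server1 at /mnt/gluster — all names are placeholders:)

    umount /mnt/gluster                            # drop the stale fuse client
    mount -t glusterfs server1:/gv0 /mnt/gluster   # remount; the client fetches a fresh volfile
    mount | grep glusterfs                         # confirm the mount came back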
01:35 zandauxzan joined #gluster
01:59 MrNaviPacho joined #gluster
02:01 jporterfield joined #gluster
02:20 mattappe_ joined #gluster
02:28 jporterfield joined #gluster
02:31 zwu joined #gluster
02:42 harish__ joined #gluster
02:44 jporterfield joined #gluster
02:51 harish joined #gluster
03:00 bharata-rao joined #gluster
03:03 dusmant joined #gluster
03:05 r0b joined #gluster
03:21 saurabh joined #gluster
03:31 jporterfield joined #gluster
03:35 pingitypong joined #gluster
03:37 kanagaraj joined #gluster
03:39 dhyan joined #gluster
03:46 shubhendu joined #gluster
03:47 saurabh joined #gluster
03:51 itisravi joined #gluster
04:04 badone_ joined #gluster
04:04 shylesh joined #gluster
04:20 jporterfield joined #gluster
04:22 kdhananjay joined #gluster
04:32 ppai joined #gluster
04:34 vpshastry joined #gluster
04:40 jporterfield joined #gluster
04:47 ndarshan joined #gluster
04:50 hagarth joined #gluster
04:55 davinder joined #gluster
04:56 RameshN joined #gluster
04:57 mattapperson joined #gluster
04:57 rjoseph joined #gluster
04:58 dusmant joined #gluster
05:01 fidevo joined #gluster
05:06 MiteshShah joined #gluster
05:09 jporterfield joined #gluster
05:10 kanagaraj joined #gluster
05:15 pingitypong joined #gluster
05:27 jporterfield joined #gluster
05:29 bala joined #gluster
05:30 satheesh joined #gluster
05:30 davinder joined #gluster
05:31 prasanth joined #gluster
05:36 meghanam joined #gluster
05:36 MiteshShah joined #gluster
05:44 aravindavk joined #gluster
05:47 davinder joined #gluster
05:51 shubhendu joined #gluster
05:53 nshaikh joined #gluster
05:56 ababu joined #gluster
05:57 davinder joined #gluster
06:08 rastar joined #gluster
06:09 prasanth joined #gluster
06:17 jporterfield joined #gluster
06:20 kshlm joined #gluster
06:21 psharma joined #gluster
06:22 mohankumar joined #gluster
06:36 yk joined #gluster
06:38 satheesh joined #gluster
06:38 vimal joined #gluster
06:38 kaushal_ joined #gluster
06:48 primechu_ joined #gluster
06:51 overclk joined #gluster
06:59 hagarth joined #gluster
07:08 ajha joined #gluster
07:13 dusmant joined #gluster
07:19 jtux joined #gluster
07:20 CheRi joined #gluster
07:40 ekuric joined #gluster
07:43 ctria joined #gluster
07:44 lalatenduM joined #gluster
07:44 ngoswami joined #gluster
07:48 verdurin joined #gluster
07:48 marcoceppi_ joined #gluster
07:48 johnmwilliams joined #gluster
07:48 fyxim joined #gluster
07:48 flrichar joined #gluster
07:48 delhage joined #gluster
08:11 d-fence joined #gluster
08:15 jtux joined #gluster
08:25 kdhananjay joined #gluster
08:31 shylesh joined #gluster
08:40 mgebbe_ joined #gluster
08:43 hagarth rjoseph: ping
08:45 badone_ joined #gluster
08:59 rjoseph joined #gluster
09:00 rjoseph left #gluster
09:08 andreask joined #gluster
09:35 davinder joined #gluster
09:52 shubhendu joined #gluster
09:53 harish joined #gluster
09:55 aravindavk joined #gluster
09:57 inodb joined #gluster
09:57 ndarshan joined #gluster
09:58 ababu joined #gluster
09:59 inodb joined #gluster
09:59 jclift joined #gluster
09:59 RameshN joined #gluster
10:07 nshaikh joined #gluster
10:15 bala joined #gluster
10:16 psyl0n joined #gluster
10:23 X3NQ joined #gluster
10:40 F^nor joined #gluster
10:47 glusterbot New news from newglusterbugs: [Bug 1048786] quota+dht: dht is not fool proof from quota deem statfs in statfs call <https://bugzilla.redhat.com/show_bug.cgi?id=1048786>
11:11 aravindavk joined #gluster
11:14 Peanut_ joined #gluster
11:16 Peanut_ Hi folks, had a bit of a scare here. I just did patches on half of my cluster: migrate the guests away, apt-get upgrade on the gluster machine, and then migrate guests back. But all of a sudden, the files in /gluster (storage for kvm guests) turned inaccessible (permission denied, even as root), and the guests I had migrated back all got into trouble.
11:19 Peanut_ Even as root on the rebooted cluster machine, I could read the directory, but could not access any of the gluster hosted files (permission denied). And now, magically, it works again.. any hints on how I could debug this?
11:19 social logs logs logs
11:19 social gluster usually screams quite a lot into the logs if such a case happens
11:20 social such case = something strange broken
11:22 Peanut_ Hmm... from dmesg: apparmor="DENIED" operation="open" parent=1943 profile="/usr/lib/libvirt/virt-aa-helper" name="/gluster/ns0.raw"
11:24 psyl0n joined #gluster
11:24 psyl0n joined #gluster
11:27 ndevos Peanut_: maybe virt-aa-helper (aa might be AppArmor?) was updated and does not know how to use fuse/glusterfs mounted volumes? you'll need to configure AppArmor for glusterfs, I guess (never used AppArmor though)
11:30 Peanut_ Ah, that's something interesting to look into.
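(One way to follow up on ndevos' suggestion is to let the virt-aa-helper profile read images on the fuse mount; a hedged sketch for Ubuntu, assuming the mount point is /gluster — the exact profile and local-override paths may differ per release:)

    # /etc/apparmor.d/local/usr.lib.libvirt.virt-aa-helper (or append to the main profile
    # if this release has no local/ include)
    /gluster/ r,
    /gluster/** rwk,

    apparmor_parser -r /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper  # reload the profile
    dmesg | grep DENIED                                                 # re-check for further denials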
11:30 tryggvil joined #gluster
11:44 ofu_ joined #gluster
11:44 samppah ndevos: do you know how gluster determines io priority? i'm wondering about performance.high-prio-threads and the other tunable options
11:46 ndevos samppah: sorry, I've never looked at details about that
12:02 ndarshan joined #gluster
12:04 bfoster joined #gluster
12:08 mgebbe_ joined #gluster
12:18 CheRi joined #gluster
12:18 shubhendu joined #gluster
12:22 Peanut So what goes wrong is a live migrate. After migrating a host away and back again, I cannot read its contents in /gluster.
12:23 andreask joined #gluster
12:23 Peanut I can't even do something like 'md5sum /gluster/test.raw', permission denied (as root).
12:23 Peanut Rebooting the guest from within the guest does not help, virsh destroy/start does.
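(For anyone trying to reproduce the symptom Peanut describes, the test boils down to checking the image on the destination host before and after a live migration; a rough sketch, assuming a guest named ns0 backed by /gluster/ns0.raw and a destination host dest1 — all names hypothetical:)

    md5sum /gluster/ns0.raw                            # readable before migration
    virsh migrate --live ns0 qemu+ssh://dest1/system   # live-migrate the guest
    # then, on dest1:
    md5sum /gluster/ns0.raw                            # reportedly fails with permission denied while the guest runs
    virsh destroy ns0 && virsh start ns0               # after this, the file is readable again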
12:28 ppai joined #gluster
12:28 RameshN joined #gluster
12:29 davinder joined #gluster
12:37 kkeithley joined #gluster
12:43 eseyman joined #gluster
12:56 dhyan joined #gluster
12:56 vpshastry left #gluster
13:01 mohankumar joined #gluster
13:06 CheRi joined #gluster
13:10 pdrakeweb joined #gluster
13:11 tor joined #gluster
13:20 dusmant joined #gluster
13:21 hagarth joined #gluster
13:23 B21956 joined #gluster
13:27 getup- joined #gluster
13:34 dhyan joined #gluster
13:35 tryggvil joined #gluster
13:37 ira joined #gluster
13:40 eseyman joined #gluster
13:43 sroy joined #gluster
13:44 primechuck joined #gluster
13:47 lalatenduM joined #gluster
13:56 recidive joined #gluster
13:57 plarsen joined #gluster
13:57 xymox joined #gluster
13:58 mattappe_ joined #gluster
13:59 mattapperson joined #gluster
14:03 vpshastry joined #gluster
14:08 robo joined #gluster
14:12 vpshastry left #gluster
14:16 bennyturns joined #gluster
14:18 bharata-rao joined #gluster
14:19 mattappe_ joined #gluster
14:23 fuzzy_id joined #gluster
14:23 bala joined #gluster
14:29 jag3773 joined #gluster
14:30 johnmilton joined #gluster
14:34 fuzzy_id how do i investigate heal failures?
14:34 fuzzy_id i have a volume which is replicated over 3 bricks
14:35 fuzzy_id there are around 7 files which are listed as heal-failed in gluster volume heal … info heal-failed
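(For completeness, the usual first steps when chasing heal failures on a replicated volume are the heal-info variants plus the self-heal daemon log; a minimal sketch, assuming the volume is called myvol:)

    gluster volume heal myvol info              # entries still pending heal
    gluster volume heal myvol info heal-failed  # entries the self-heal daemon gave up on
    gluster volume heal myvol info split-brain  # entries with conflicting copies
    gluster volume heal myvol full              # trigger a full self-heal pass
    less /var/log/glusterfs/glustershd.log      # self-heal daemon log on each brick server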
14:36 jbrooks joined #gluster
14:36 bolazzles joined #gluster
14:40 aixsyd joined #gluster
14:40 davinder joined #gluster
14:41 aixsyd So - anyone figure out why self-heals max out at 120-130MB/s over Infiniband?
14:46 Peanut :q
14:46 Peanut Oops, srry :-)
14:47 jobewan joined #gluster
14:53 aixsyd JoeJulian: You around?
14:54 qdk joined #gluster
14:58 CheRi joined #gluster
15:02 robo joined #gluster
15:07 Peanut Does semiosis still hang out in here?
15:07 aixsyd usually
15:11 fuzzy_id ok
15:11 fuzzy_id i reduced the failing self-heals to the following log entry:
15:12 fuzzy_id E [afr-self-heal-entry.c:2296:afr_sh_post_nonblocking_entry_cbk] 0-lidl-lead-vol-replicate-0: Non Blocking entrylks failed for <gfid:76…
15:13 fuzzy_id that is the directory in which files fail to heal
15:13 saurabh joined #gluster
15:13 bugs_ joined #gluster
15:14 fuzzy_id there seems nothing special about this directory
15:14 dbruhn joined #gluster
15:15 fuzzy_id the directory is actually on the third node
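(The gfid in that entrylk error can be mapped back to a path on a brick via the .glusterfs index; a hedged sketch, assuming a brick at /bricks/b1 and using a placeholder for the full gfid, which is truncated in the log above:)

    GFID=76xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx    # fill in the full gfid from the log
    # for a directory, the gfid entry is a symlink to its parent-relative path
    ls -l /bricks/b1/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
    # for a regular file, find the hard-linked real path instead
    find /bricks/b1 -samefile /bricks/b1/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID -not -path '*/.glusterfs/*'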
15:19 wushudoin joined #gluster
15:21 dbruhn JoeJulian, I had a rebalance fix layout kill my cluster a few weeks back, that being said I restarted it on Friday, and now am seeing a bunch of errors in the rebalance log on the new servers/bricks.
15:21 dbruhn Is this expected when it comes across stuff it was able to create, or should I be looking deeper into it?
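(A quick way to gauge whether those rebalance errors matter is to compare them against the status counters; a minimal sketch, assuming the volume is named myvol:)

    gluster volume rebalance myvol status                          # per-node scanned/failed counts
    grep -c ' E ' /var/log/glusterfs/myvol-rebalance.log           # count error-level lines
    grep ' E ' /var/log/glusterfs/myvol-rebalance.log | tail -20   # inspect the most recent ones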
15:23 diegows joined #gluster
15:24 hjmangalam joined #gluster
15:26 tryggvil joined #gluster
15:26 theron joined #gluster
15:32 daMaestro joined #gluster
15:43 theron joined #gluster
15:51 jclift left #gluster
15:53 Peanut I get "0-fuse: xlator does not implement release_cbk" in xlator/performance/open-behind.so - does this sound familiar to anyone?
15:57 plarsen joined #gluster
15:57 jclift joined #gluster
16:00 jclift left #gluster
16:01 zerick joined #gluster
16:03 bala joined #gluster
16:03 Technicool joined #gluster
16:07 kaptk2 joined #gluster
16:10 hagarth joined #gluster
16:12 dbruhn Peanut, are all of your versions matched?
16:15 dhyan joined #gluster
16:17 bgpepi joined #gluster
16:19 a_normal_student joined #gluster
16:19 Peanut dbruhn: no, I am updating my cluster - so one machine is now at 3.4.1, whereas the other is still at 3.4.0
16:20 dbruhn I am assuming you have an issue with versions not playing well together, correct the versions, restart the cluster, and everything should work
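(Checking for mixed versions, as dbruhn suggests, is quick to do on each node; a minimal sketch — package query shown for Debian/Ubuntu, use rpm -qa on RPM-based distros:)

    gluster --version | head -1      # CLI / glusterd package version
    glusterfs --version | head -1    # client/brick binary version
    dpkg -l | grep glusterfs         # installed glusterfs packages and their versions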
16:21 Ramereth joined #gluster
16:21 dylan_ joined #gluster
16:23 a_normal_student Hello, I've been having a bit of trouble setting up gluster on an ec2 instance on aws
16:24 a_normal_student I was hoping someone here would be willing to help me; I've been stuck on this one step for a while
16:25 Peanut dbruhn: I could try that - but as live migration is currently broken, that'd be a bit of downtime on services.
16:26 a_normal_student I am using gluster version 3.4.2 with centos 6.5, and have been following the getting started guide on the main site
16:27 dbruhn what step are you stuck on?
16:27 a_normal_student the issue occurs when setting up a gluster volume; it will say my host server is not in 'Peer in Cluster' state, but when I do probe status, it shows as such
16:28 dbruhn do you have iptables, and selinux turned off?
16:28 a_normal_student yes
16:28 dbruhn and are you using ipaddresses or hostnames?
16:28 a_normal_student i'm using elastic ips
16:29 dbruhn forgive my ignorance on AWS, are the elastic IP addresses not the external public internet addresses?
16:30 a_normal_student they are, yes
16:30 nage joined #gluster
16:30 nage joined #gluster
16:31 a_normal_student you can allocate them and then associate the ip with one of your ec2 instances
16:31 dbruhn are your servers in different zones?
16:31 a_normal_student they are both in us-east, so the time zones should be the same
16:32 dbruhn can the servers talk to each other using internal IP addresses?
16:32 hagarth a_normal_student: do ensure that hostnames used in bricks during volume creation resolve on both nodes (it is better to use fqdn).
16:34 a_normal_student if you're referring to using the private ips, that doesn't seem to work, but they can both talk to each other with the public ips
16:35 a_normal_student @hagarth I'm not sure what that means; I'm very new with this
16:37 a_normal_student Oh, yes hagarth, they do
16:38 a_normal_student would there be an issue with the ports I have open?  I have 24008, 24008, and 49152-49164 open
16:39 a_normal_student *24007
16:40 a_normal_student they are open on both servers as well
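(Since port problems commonly show up as exactly this 'Peer in Cluster' error, it can help to verify reachability from each node to the other; a hedged sketch, with a placeholder peer address — for 3.4, glusterd listens on 24007 and each brick gets a port from 49152 upward:)

    PEER=203.0.113.10                       # placeholder address of the other node
    nc -zv $PEER 24007                      # glusterd management port
    nc -zv $PEER 49152                      # first brick port
    iptables -L -n | grep -E '24007|49152'  # confirm nothing is dropping these locally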
16:45 sroy_ joined #gluster
16:45 semiosis [08:17] <Peanut> The issue is, live migration of libvirt-guests fails with the 3.4.1 packages, but works with 3.4.0. The issue is not within libvirt itself: doing a live migration makes /gluster/guest.raw completely unreadable (access denied) even as root.
16:46 semiosis [08:03] <Peanut> If you're around, could I ask for your assistance in an odd gluster problem that might be related to your Ubuntu packages?
16:46 semiosis Peanut: i can't imagine how that would be a packaging problem.  hopefully someone else more knowledgeable about libvirt migration can weigh in
16:47 Peanut Hi semiosis
16:48 semiosis hi
16:48 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:48 semiosis ha
16:48 samppah :O
16:48 Peanut It's a really odd issue. If I live-migrate, I lose access (permission denied) to a gluster-ed file, even if I simply try to cp or dd it.
16:49 Peanut Once I stop the (very broken) virtual machine, I get access to the file again, and I can start it up just fine.
16:49 Peanut The problem is NOT with app-armor, which would have no influence on dd etc.
16:49 jclift joined #gluster
16:51 Peanut I've just patched the other server, and it now has the same problem: live migration makes accessing the file on the receiving cluster impossible.
16:51 samppah Peanut: did i understand correctly that this works fine with 3.4.0?
16:53 bennyturns jclift, here!
16:53 Peanut samppah: correct
16:55 Peanut It seems to be fuse that is throwing a fit, as access is denied to everyone, not just to libvirt.
16:55 jclift bennyturns: /me gives Empathy a dirty, dirty look
16:56 bennyturns :)
16:57 semiosis bug 1018793
16:57 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1018793 medium, unspecified, ---, vbellur, CLOSED CURRENTRELEASE, Saucy packaging problem
16:58 semiosis i removed the dependency on fuse-utils for saucy
16:58 semiosis Peanut: what release are you using?
16:58 Peanut semiosis: raring
16:59 semiosis hmm
16:59 semiosis doubt that's the issue then
16:59 Peanut fuse-utils is installed
17:01 semiosis right, it ought to be
17:04 a_normal_student this is the log file if this helps, although not sure if it has any clues as to what causes the not in 'Peer in Cluster' state issue: http://pastebin.com/3xQz9k9J
17:04 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
17:05 marcoceppi joined #gluster
17:05 dylan_ joined #gluster
17:07 pravka joined #gluster
17:08 rotbeard joined #gluster
17:08 Cenbe joined #gluster
17:09 Cenbe joined #gluster
17:10 johnbot11 joined #gluster
17:11 Peanut Could my problem be that both hosts are writing to the same file for a short time?
17:14 semiosis Peanut: that should not be a problem (or else it's a bug)
17:15 Peanut semiosis: I just tried with dd if=/dev/random of=/gluster/random bs=1024 count=100 on both machines at the same time, no problems.
17:15 semiosis right
17:15 semiosis apps could be locking the file, which dd alone doesn't afaik
17:16 semiosis that wouldnt be a bug tho, just how the app uses files
17:16 Peanut But can an app lock a file in such a way that root cannot read the file anymore?
17:17 _Bryan_ joined #gluster
17:17 zapotah joined #gluster
17:18 LoudNoises joined #gluster
17:22 nueces joined #gluster
17:23 zapotah joined #gluster
17:23 zapotah joined #gluster
17:23 Mo_ joined #gluster
17:24 dylan_ joined #gluster
17:25 dbruhn semiosis, I have a lost and found directory on the root of one of my bricks, could that create a split brain error? as my bricks are directly on the root of the filesystems
17:25 dbruhn lost+found
17:32 failshell joined #gluster
17:33 xymox joined #gluster
17:34 semiosis dbruhn: unlikely
17:35 dbruhn thanks, I didn't think so but wanted to double check
17:43 jclift left #gluster
17:46 zapotah joined #gluster
17:48 brian_ joined #gluster
17:49 brian_ Hi, does anyone know much about running openstack and gluster?
18:03 jbrooks joined #gluster
18:04 SpeeR joined #gluster
18:10 japuzzo joined #gluster
18:14 japuzzo joined #gluster
18:15 japuzzo joined #gluster
18:15 dbruhn Ok, weird one, I have a directory that wasn't created during a fix layout, what do I do to manually create that directory, or is there a way to make the system correct this?
18:23 erik49 joined #gluster
18:29 pingitypong joined #gluster
18:32 a_normal_student is there a way you can download a previous version of gluster?  I can't seem to get 3.4.2 to work
18:36 zaitcev joined #gluster
18:40 kkeithley What linux dist?
18:44 plarsen joined #gluster
18:44 plarsen joined #gluster
18:45 davinder joined #gluster
18:46 dbruhn If I remember from earlier he is on cent 6.4/5
18:47 kkeithley Okay. I suppose it's safe to presume he got his bits from download.gluster.org. What's wrong with http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.1/CentOS/  ?
18:47 glusterbot Title: Index of /pub/gluster/glusterfs/3.4/3.4.1/CentOS (at download.gluster.org)
18:48 kkeithley If you used the YUM repo files from there, edit /etc/yum.repos.d/glusterfs-epel.repo
18:49 kkeithley Change every occurrence of LATEST to 3.4/3.4.1
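(kkeithley's edit can be done in one pass; a minimal sketch, assuming the repo file came from download.gluster.org as above — the package list is illustrative and may differ:)

    sed -i 's|LATEST|3.4/3.4.1|g' /etc/yum.repos.d/glusterfs-epel.repo
    yum clean metadata
    yum downgrade glusterfs glusterfs-server glusterfs-fuse   # or yum install if not yet installed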
18:55 zapotah joined #gluster
18:55 zapotah joined #gluster
19:06 psyl0n joined #gluster
19:10 pingitypong joined #gluster
19:19 yosafbridge joined #gluster
19:27 cfeller joined #gluster
19:29 robo joined #gluster
19:46 tryggvil joined #gluster
20:02 pingitypong joined #gluster
20:02 a_normal_student I was able to get 3.4.2 to work with CentOS by changing the hosts file and putting the IPs in it
20:03 a_normal_student happy it's finally working; thanks for your time, guys
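(For anyone hitting the same thing on EC2, the fix a_normal_student describes amounts to pinning names in /etc/hosts on both instances and using those names for the probe and volume create; a hedged sketch with placeholder names, addresses, and brick paths:)

    # /etc/hosts on both nodes (placeholder elastic IPs)
    203.0.113.10  gluster1
    203.0.113.11  gluster2

    gluster peer probe gluster2                  # run from gluster1
    gluster volume create gv0 replica 2 gluster1:/export/brick1 gluster2:/export/brick1
    gluster volume start gv0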
20:06 yosafbridge joined #gluster
20:07 Staples84 joined #gluster
20:10 robo joined #gluster
20:15 36DAB3S70 joined #gluster
20:17 primechu_ joined #gluster
20:27 aixsyd joined #gluster
20:27 aixsyd so has anyone managed to get faster heals over a 10gb IB pipe than 120-130 MB/s?
20:30 failshel_ joined #gluster
20:33 morsik msciciel_: ↑
20:35 aixsyd im just shocked that my HDD can write at 300MB/s, but heals stick at 120-130
20:35 morsik 300MB/s seq or random? :P
20:36 failshell joined #gluster
20:36 aixsyd random
20:37 morsik what about cpu usage... what processor, what about xfs formatting settings, etc, etc..
20:38 aixsyd hm. Know what? I think it may be the drives. you were right - that speed came from a dd of /dev/null and /dev/zero
20:39 aixsyd /dev/random and /dev/urandom show 80MB/s
20:39 morsik so it's seq, not random
20:40 aixsyd how the hell, then, is a RAID10 getting 80MB/s?
20:40 morsik also, /dev/random and /dev/urandom are fuucking slow
20:40 morsik use sysbench
20:40 morsik aixsyd: i wonder what cpu you have, because my /dev/urandom gets about 6M/s :D
20:40 morsik (check if=/dev/urandom of=/dev/null, it'll probably stuck at 80M/s too)
20:41 aixsyd sec
20:42 aixsyd wow, 14MB/s
20:42 morsik dd is the worst thing for checking hdd speed :P
20:43 morsik i always divide dd result by 4
20:43 morsik to get "something about correct"
20:43 morsik (which is quite realistic)
20:43 aixsyd wow, so... why the hell is dd if=/dev/urandom of=/dev/null showing 14MB/s?
20:43 morsik aixsyd: because /dev/urandom is slow
20:43 morsik in dd i got 1.2G/s at raid6 with 24 hdds, and sysbench showed 280M/s random
20:43 morsik (so about 'divide by 4')
20:44 morsik anyway, use sysbench. really.
20:46 aixsyd er sysbench...
20:54 harish joined #gluster
20:55 dneary_ joined #gluster
21:01 aixsyd morsik: sysbench gives me this:
21:01 aixsyd FATAL: Cannot open file errno = 117 (�6���)
21:02 aixsyd sysbench --test=fileio --file-total-size=150G prepare  <-- with that command.
21:03 morsik bug.
21:03 morsik as google said.
21:03 aixsyd epic.
21:04 aixsyd seems to work well! :P
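(For reference, the full sysbench fileio workflow morsik is recommending is prepare/run/cleanup; a minimal sketch in the old 0.4.x option syntax matching the prepare command aixsyd used above:)

    sysbench --test=fileio --file-total-size=150G prepare                     # create the test files
    sysbench --test=fileio --file-total-size=150G --file-test-mode=rndrw \
             --max-time=300 --max-requests=0 run                              # timed random read/write run
    sysbench --test=fileio --file-total-size=150G cleanup                     # remove the test files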
21:04 plarsen joined #gluster
21:08 jbrooks joined #gluster
21:08 aixsyd morsik: think my slow speeds could be the stripe seize on the RAID? its set to default 64K, RAID10
21:08 aixsyd *size
21:10 morsik possible
21:10 morsik 64K is soo low ;o
21:10 morsik default for controllers is about 256?
21:12 recidive joined #gluster
21:14 ikk joined #gluster
21:15 dbruhn if I am running a rebalance fix-layout will it cause the directories to be in split brain until it finishes?
21:16 semiosis doubt it
21:19 dbruhn have you ever seen an issue where the files and directories double in the filesystem?
21:27 tryggvil joined #gluster
21:34 andreask joined #gluster
21:36 tryggvil_ joined #gluster
21:39 jobewan joined #gluster
21:56 psyl0n joined #gluster
21:58 robo joined #gluster
22:04 eclectic joined #gluster
22:04 yinyin_ joined #gluster
22:04 mattappe_ joined #gluster
22:15 SpeeR is there a path for going from glusterfs to redhat storage without rebuilding? the base is centos 6.5 but the powers that be would like some sort of support
22:20 dbruhn SpeeR, seems like something you might want to talk to Redhat support about as part of your purchase.
22:20 SpeeR Yeh, I'm waiting for a call back from our acct manager
22:21 dbruhn What version are you on?
22:21 SpeeR was just thinking someone here might have gone through the same thing
22:21 SpeeR 3.4.1
22:21 dbruhn Current Redhat storage version is 3.3.1 I think
22:21 dbruhn you are also talking about an OS swap out.... I can't imagine anything less than a forklift
22:22 dbruhn How many servers are you running?
22:22 SpeeR hah yeh, at least the OS drives are separate
22:23 SpeeR I could probably remove each brick, and reinstall one at a time
22:23 dbruhn yep
22:23 SpeeR and copy the files to the new setup
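(If the brick-at-a-time route is taken, the distribute-layer way to drain and re-add a brick looks roughly like this; a hedged sketch, assuming a volume gv0 and a brick server1:/export/brick1 — names are placeholders, and replicated volumes need the whole replica set handled together rather than a single brick:)

    gluster volume remove-brick gv0 server1:/export/brick1 start    # begin migrating data off the brick
    gluster volume remove-brick gv0 server1:/export/brick1 status   # wait for "completed"
    gluster volume remove-brick gv0 server1:/export/brick1 commit   # drop the brick from the volume
    # ... reinstall the node on the supported OS, re-probe the peer, then ...
    gluster volume add-brick gv0 server1:/export/brick1             # re-add the rebuilt brick
    gluster volume rebalance gv0 start                              # redistribute data onto it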
22:31 pingitypong joined #gluster
22:35 johnbot11 joined #gluster
22:39 recidive joined #gluster
22:55 eryc joined #gluster
22:59 yinyin joined #gluster
23:13 pingitypong joined #gluster
23:19 ira joined #gluster
23:21 mattappe_ joined #gluster
23:40 mattappe_ joined #gluster
23:45 recidive joined #gluster
23:46 cyberbootje joined #gluster
23:48 mattappe_ joined #gluster
23:50 mattappe_ joined #gluster
23:50 harish_ joined #gluster
23:53 pingitypong joined #gluster
23:57 ^^rcaskey joined #gluster
23:57 shapemaker joined #gluster
23:57 mattappe_ joined #gluster
23:58 dbruhn__ joined #gluster
23:59 recidive joined #gluster
