IRC log for #gluster, 2013-08-29


All times shown according to UTC.

Time Nick Message
00:01 awheeler joined #gluster
00:11 kevein joined #gluster
00:22 ProT-0-TypE joined #gluster
00:32 asias joined #gluster
00:47 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <http://goo.gl/XFSM4>
00:50 Shdwdrgn joined #gluster
01:01 harish joined #gluster
01:34 jporterfield joined #gluster
01:41 lpabon joined #gluster
01:51 harish joined #gluster
01:57 jporterfield joined #gluster
02:10 jporterfield joined #gluster
02:31 jporterfield joined #gluster
02:33 chirino joined #gluster
02:43 SunilVA joined #gluster
02:43 awheeler joined #gluster
02:45 awheeler joined #gluster
02:58 saurabh joined #gluster
03:03 awheeler joined #gluster
03:05 chirino joined #gluster
03:18 bharata-rao joined #gluster
03:21 jporterfield joined #gluster
03:25 shubhendu joined #gluster
03:32 jporterfield joined #gluster
03:35 shylesh joined #gluster
03:39 ruk joined #gluster
03:48 ruk joined #gluster
03:52 ruk Looking for some help diagnosing a sudden load spike on a Gluster service, from ~1.0 to ~9.0 over the last 6 hours. Gluster v3.3.1 on an EC2 server. A single brick mounted by ~25 clients. Has been running smoothly for months. Looking for pointers on where to look for problems.
04:11 bala joined #gluster
04:14 jporterfield joined #gluster
04:16 ppai joined #gluster
04:23 RobertLaptop joined #gluster
04:24 RobertLaptop joined #gluster
04:25 mohankumar__ joined #gluster
04:31 nshaikh joined #gluster
04:37 sac joined #gluster
04:37 saurabh joined #gluster
04:37 Humble joined #gluster
04:38 dusmant joined #gluster
04:38 shylesh joined #gluster
04:44 sac`away joined #gluster
04:46 itisravi joined #gluster
04:46 harish_ joined #gluster
04:55 davinder joined #gluster
04:57 dusmant joined #gluster
05:01 anands joined #gluster
05:06 sgowda joined #gluster
05:08 rastar joined #gluster
05:08 jporterfield joined #gluster
05:14 jporterfield joined #gluster
05:15 aravindavk joined #gluster
05:20 shylesh joined #gluster
05:27 vpshastry1 joined #gluster
05:27 CheRi joined #gluster
05:30 raghu joined #gluster
05:35 tryggvil joined #gluster
05:42 shruti joined #gluster
05:43 Cenbe joined #gluster
05:45 mmalesa joined #gluster
05:47 kanagaraj joined #gluster
05:48 glusterbot New news from newglusterbugs: [Bug 1002385] NFS filehandle size change from 3.2 results in stack corruption <http://goo.gl/LrmbXU>
05:49 RameshN joined #gluster
05:50 vshankar joined #gluster
05:54 shubhendu joined #gluster
05:57 guigui1 joined #gluster
05:57 bulde joined #gluster
05:58 spandit joined #gluster
06:01 hagarth joined #gluster
06:02 ProT-0-TypE joined #gluster
06:02 lalatenduM joined #gluster
06:02 ProT-0-TypE joined #gluster
06:04 satheesh joined #gluster
06:09 jtux joined #gluster
06:12 ricky-ticky joined #gluster
06:13 ProT-0-TypE joined #gluster
06:21 rwheeler joined #gluster
06:22 dusmant joined #gluster
06:30 ndarshan joined #gluster
06:34 vimal joined #gluster
06:39 davinder2 joined #gluster
06:46 ngoswami joined #gluster
06:48 glusterbot New news from newglusterbugs: [Bug 1002399] [RHS-RHOS] mkfs.ext4 hangs at "Creating journal" on cinder volume attached to an instance during rebalance with self-heal <http://goo.gl/2yBF2a>
06:51 wirewater joined #gluster
07:01 ctria joined #gluster
07:04 eseyman joined #gluster
07:09 _ndevos joined #gluster
07:10 ndevos joined #gluster
07:12 mohankumar__ joined #gluster
07:16 jtux joined #gluster
07:22 mmalesa joined #gluster
07:32 ProT-0-TypE joined #gluster
07:38 rwheeler joined #gluster
07:51 edward1 joined #gluster
07:51 jurrien_ joined #gluster
07:52 jurrien_ joined #gluster
07:52 jtux joined #gluster
08:02 sgowda joined #gluster
08:04 dusmant joined #gluster
08:06 the-me joined #gluster
08:10 mbukatov joined #gluster
08:12 ppai_ joined #gluster
08:12 mmalesa joined #gluster
08:13 mmalesa_ joined #gluster
08:15 jtux joined #gluster
08:17 StarBeast joined #gluster
08:25 spandit joined #gluster
08:26 sgowda joined #gluster
08:32 psharma joined #gluster
08:43 mgebbe_ joined #gluster
08:44 mgebbe__ joined #gluster
08:44 harish_ joined #gluster
08:44 mgebbe___ joined #gluster
08:45 mgebbe__ joined #gluster
08:53 jporterfield joined #gluster
08:58 pkoro joined #gluster
09:03 duerF joined #gluster
09:13 Cenbe joined #gluster
09:17 morse joined #gluster
09:25 dusmant joined #gluster
09:26 spider_fingers joined #gluster
09:28 36DABC3TU joined #gluster
09:29 20WACZA5W joined #gluster
09:32 mbukatov joined #gluster
09:38 vpshastry1 left #gluster
09:38 vpshastry1 joined #gluster
09:48 samsamm joined #gluster
09:50 rjoseph joined #gluster
09:51 rjoseph joined #gluster
09:54 ninkotech__ joined #gluster
10:24 shruti joined #gluster
10:27 shubhendu joined #gluster
10:27 ndarshan joined #gluster
10:28 aravindavk joined #gluster
10:28 kanagaraj joined #gluster
10:29 RameshN joined #gluster
10:36 spandit joined #gluster
10:37 ricky-ticky joined #gluster
10:38 hagarth @channelstats
10:38 glusterbot hagarth: On #gluster there have been 175937 messages, containing 7414842 characters, 1238560 words, 4941 smileys, and 658 frowns; 1078 of those messages were ACTIONs. There have been 67861 joins, 2116 parts, 65725 quits, 23 kicks, 166 mode changes, and 7 topic changes. There are currently 226 users and the channel has peaked at 236 users.
10:39 rastar joined #gluster
10:48 kanagaraj joined #gluster
10:48 shubhendu joined #gluster
10:48 ndarshan joined #gluster
10:53 shruti joined #gluster
10:55 RameshN joined #gluster
10:56 dusmant joined #gluster
11:05 B21956 joined #gluster
11:14 kkeithley1 joined #gluster
11:19 glusterbot New news from newglusterbugs: [Bug 1002511] File operations caused smbd process to crash <http://goo.gl/zXiMS8>
11:20 satheesh1 joined #gluster
11:24 CheRi joined #gluster
11:25 ndarshan joined #gluster
11:27 jporterfield joined #gluster
11:30 manik1 joined #gluster
11:35 plarsen joined #gluster
11:39 hagarth joined #gluster
11:40 piotrektt joined #gluster
11:42 failshell joined #gluster
11:44 rgustafs joined #gluster
11:50 jporterfield joined #gluster
11:52 dusmant joined #gluster
11:59 NuxRo hi, anyone got gluster working with xenserver?
12:08 ujjain joined #gluster
12:09 RameshN joined #gluster
12:17 ndarshan joined #gluster
12:19 CheRi joined #gluster
12:21 plarsen joined #gluster
12:23 JoeJulian NuxRo: Yep, I've seen lots of people using gluster with xenserver.
12:27 NuxRo JoeJulian: cheers, found some google results as well, but looks like people use it via NFS, was hoping for a "native connector"
12:29 JoeJulian I've not used xenserver. Does it have fuse? Can you install software on it?
12:29 JoeJulian Theoretically if you can install software on it, you could use the native api via qemu.
12:33 nshaikh left #gluster
12:36 jmsa joined #gluster
12:40 mooperd joined #gluster
12:47 awheeler joined #gluster
12:48 awheeler joined #gluster
12:48 CheRi joined #gluster
12:49 bennyturns joined #gluster
12:49 glusterbot New news from newglusterbugs: [Bug 1000779] running add-brick then remove-brick, then restarting gluster leads to broken volume brick counts <http://goo.gl/0QXYbT>
12:56 robo joined #gluster
12:57 bulde pu771
12:57 bulde wrong window
13:03 rcheleguini joined #gluster
13:03 ProT-0-TypE joined #gluster
13:06 bulde1 joined #gluster
13:09 dblack joined #gluster
13:19 glusterbot New news from newglusterbugs: [Bug 1002556] running add-brick then remove-brick, then restarting gluster leads to broken volume brick counts <http://goo.gl/YqOYSj> || [Bug 1002577] Add brick operation is causing one of the smbd process in server to crash <http://goo.gl/LFErbk>
13:21 guigui1 joined #gluster
13:21 glusterbot New news from resolvedglusterbugs: [Bug 1000779] running add-brick then remove-brick, then restarting gluster leads to broken volume brick counts <http://goo.gl/0QXYbT>
13:31 B219561 joined #gluster
13:32 vpshastry1 left #gluster
13:32 bala joined #gluster
13:32 dewey joined #gluster
13:39 manik joined #gluster
13:39 johnmorr joined #gluster
13:45 dusmant joined #gluster
13:45 samsamm joined #gluster
13:46 kaptk2 joined #gluster
13:50 bugs_ joined #gluster
13:57 B21956 joined #gluster
13:58 MrNaviPacho joined #gluster
14:02 robo joined #gluster
14:05 chirino joined #gluster
14:05 mooperd joined #gluster
14:06 wushudoin joined #gluster
14:10 mmalesa joined #gluster
14:11 jclift_ joined #gluster
14:12 hagarth joined #gluster
14:16 JuanBre joined #gluster
14:20 rwheeler joined #gluster
14:22 bulde joined #gluster
14:24 asias joined #gluster
14:25 wrale joined #gluster
14:26 vmos hello, I'm trying to mount gluster via nfs with a specific uid, something like this
14:26 vmos 172.1.1.15:/voldata  /data   nfs     nfsvers=3,hard,intr,uid=2000,gid=2000,auto 0 0
14:27 vmos now that worked (except for a very specific issue I'm trying to address) before I added the uid; now it doesn't mount at all, am I barking up the wrong tree?
14:28 wrale is there any way to natively encrypt data at rest with glusterfs (especially its s3-like api)?
14:28 Guest53741 joined #gluster
14:32 jdarcy joined #gluster
14:33 JoeJulian vmos: does it error?
14:34 JoeJulian wrale: that feature's not yet included with GlusterFS. Perhaps see ,,(hekafs) though.
14:34 glusterbot wrale: CloudFS is now HekaFS. See http://hekafs.org or https://fedoraproject.org/wiki/Features/CloudFS
14:34 wrale JoeJulian: thank you.. i'll take a look
14:35 vmos JoeJulian: there is nothing in the gluster logs at all and I can't see anything in the other logs mentioning nfs or gluster
14:36 JoeJulian nfs is a kernel based client. There wouldn't be anything in a gluster log.
14:37 vmos didn't think so but I would have expected something in dmesg or syslog
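A minimal way to test this by hand, reusing the export from the fstab line above. Note that uid= and gid= are ownership-mapping options for filesystems such as vfat rather than standard NFS mount options, so they are a plausible (though unconfirmed in this log) cause of the failure:

    # try the mount manually with verbose output, then check the kernel log
    mount -v -t nfs -o nfsvers=3,hard,intr 172.1.1.15:/voldata /data
    dmesg | tail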
14:37 dbruhn joined #gluster
14:41 kkeithley_ data-at-rest never made it into HekaFS either
14:41 JoeJulian ah
14:42 kkeithley_ At the 3.5 planning meeting yesterday (on freenode:#gluster-dev) it was decide that it's a Nice To Have in 3.5.
14:42 kkeithley_ s/decide/decided/
14:42 glusterbot kkeithley_: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
14:43 kkeithley_ good job glusterbot
14:43 JoeJulian I wonder why that history lags so badly... :/
14:44 ndevos oh, I always assumed it was because of braces or things
14:44 ndevos s/things/other things/
14:44 glusterbot What ndevos meant to say was: oh, I always assumed it was because of braces or other things
14:44 ndevos see?
14:44 * JoeJulian shrugs
14:44 JoeJulian I'll look into it someday... ;)
14:45 ndevos s/someday/(probably) never/
14:45 glusterbot ndevos: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
14:45 ndevos :P
14:45 JoeJulian hehe
14:49 tryggvil joined #gluster
14:49 wrale this link says that hekafs is encrypted even on disk : http://hekafs.org/
14:49 glusterbot Title: HekaFS (at hekafs.org)
14:51 daMaestro joined #gluster
14:54 robo joined #gluster
14:54 sprachgenerator joined #gluster
14:55 kkeithley_ yes, it does say that.
14:55 robo joined #gluster
14:56 sprachgenerator joined #gluster
14:57 kkeithley_ The cake is a lie too.
15:02 robo joined #gluster
15:09 ricky-ticky joined #gluster
15:17 gkleiman joined #gluster
15:18 zerick joined #gluster
15:20 nullck joined #gluster
15:26 wrale kkeithley: that saves me much time.. i appreciate that :)
15:32 JoeJulian jdarcy: honestly, as long as you're sure, I'm satisfied. I just don't want a repeat of my drbd failure.
15:34 mmalesa joined #gluster
15:35 mmalesa joined #gluster
15:35 jdarcy JoeJulian: Quite understandable.
15:36 JoeJulian you understand what you're doing a hell of a lot better than I ever will. :D
15:38 LoudNoises joined #gluster
15:38 jdarcy JoeJulian: Only from a developer's perspective, though.  It's folks like you who really feel the pain when I screw up, and I try to respect that.
15:38 jdarcy kkeithley: I thought we did have on-disk encryption in HekaFS, just not very strong (i.e. the stuff from before Edward).
15:40 kkeithley_ oh, did we? (and is weak encryption even interesting?)
15:40 kkeithley_ I was just thinking about the fact that we didn't have Edward's work in.
15:40 kkeithley_ Double ROT13, just to be sure
15:41 rastar joined #gluster
15:42 JoeJulian lol
15:44 jdarcy kkeithley: IIRC there are small known vulnerabilities, but already better than much of what's on the market.
15:45 jdarcy kkeithley: For example, it explicitly avoids some known and common problems around chosen plaintext and bit tampering.
15:46 jdarcy We all use the same ciphers, but *how* we use them matters a lot.
15:47 nightwalk joined #gluster
15:47 * jdarcy classifies just about all current use of crypto as "not very strong"
15:47 wrale :)
15:48 wrale quantum crypto over here.. nsa just tapped it, though.. collapsed my quantum state
15:49 jdarcy I'm using quantum crypto over here, so . . . don't look at it!  Dammit.
15:49 wrale lol
16:02 robo joined #gluster
16:09 MrNaviPacho joined #gluster
16:13 tziOm joined #gluster
16:15 Guest53741 joined #gluster
16:20 glusterbot New news from newglusterbugs: [Bug 996768] glusterfs-fuse3.4.0 mount option for read-only not functional on rhel 5.9 <http://goo.gl/hmOKdK>
16:24 zaitcev joined #gluster
16:26 NuxRo JoeJulian: xenserver has a centos 5.x based underlying management domain (dom0), so theoretically it can do fuse; the hard part is making xenserver add a data store of "glusterfs" type, not NFS
16:27 NuxRo but even nfs might be acceptable, cheers
16:31 MrNaviPacho joined #gluster
16:32 jporterfield joined #gluster
16:38 Mo_ joined #gluster
16:40 robos joined #gluster
16:42 dbruhn joined #gluster
16:47 B21956 left #gluster
16:50 mohankumar__ joined #gluster
16:54 glusterbot New news from resolvedglusterbugs: [Bug 980770] GlusterFS native client fails to mount a volume read-only <http://goo.gl/nTFRU> || [Bug 996768] glusterfs-fuse3.4.0 mount option for read-only not functional on rhel 5.9 <http://goo.gl/hmOKdK>
16:58 rotbeard joined #gluster
17:00 MrNaviPacho joined #gluster
17:01 aliguori joined #gluster
17:03 bulde joined #gluster
17:11 jporterfield joined #gluster
17:14 mmalesa joined #gluster
17:16 morsik joined #gluster
17:18 marbu joined #gluster
17:21 nightwalk joined #gluster
17:25 MrNaviPacho joined #gluster
17:31 robos joined #gluster
17:39 lalatenduM joined #gluster
17:52 durzo joined #gluster
17:52 durzo semiosis, are you around? i just upgraded to 3.4 using your debs only to discover there is no more upstart job file.. where can i find it?
17:56 failshell joined #gluster
18:00 MrNaviPacho joined #gluster
18:00 gkleiman joined #gluster
18:05 bennyturns joined #gluster
18:06 MrNaviPacho joined #gluster
18:09 mmalesa joined #gluster
18:09 [o__o] left #gluster
18:11 [o__o] joined #gluster
18:12 morsik joined #gluster
18:16 dewey joined #gluster
18:26 bulde joined #gluster
18:31 sprachgenerator joined #gluster
18:54 glusterbot New news from resolvedglusterbugs: [Bug 893778] Gluster 3.3.1 NFS service died after writing bunch of data <http://goo.gl/NLoE3>
19:05 RedShift joined #gluster
19:06 RedShift hi all
19:06 RedShift is gluster capable of storing Vmware ESXI datastores?
19:07 RedShift I'm going through this document (http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting) which says gluster is not suited for hosting databases
19:07 glusterbot <http://goo.gl/7m2Ln> (at www.gluster.org)
19:07 RedShift that makes me worry about vmware datastores
19:11 JoeJulian RedShift: It's not suited to high volume databases or databases with multiple caching servers. I run a single mysql server with an innodb backend on a replica 3 volume. I've never had any problems with it.
19:12 * JoeJulian wants to find some time to do some performance testing for that.
19:12 RedShift hmm
19:25 lpabon joined #gluster
19:30 manik joined #gluster
19:30 manik joined #gluster
19:37 jskinner_ joined #gluster
20:14 manik1 joined #gluster
20:22 B21956 joined #gluster
20:22 voronaam joined #gluster
20:23 voronaam Hi! I was wondering, with Gluster 3.4 and a replicated volume (replica factor = 2) do I still have to grow the cluster by two bricks at a time?
20:24 voronaam As it is said in http://gluster.org/community/documentation/index.php/Gluster_3.2:_Expanding_Volumes
20:24 glusterbot <http://goo.gl/1YvRY> (at gluster.org)
20:24 JoeJulian Of course
20:24 morsik joined #gluster
20:26 JoeJulian When adding replicas, you have to add enough bricks to make a replica.
20:26 voronaam And do I have to specify "replica 2" in the add-brick command?
20:26 cicero no
20:26 voronaam Ok, thanks!
20:26 JoeJulian Not if you're already at replica 2
20:27 voronaam Oh, so I can change that number? Say I have 2 nodes, RF = 2. Can I add 4 more nodes and set RF to 3?
20:27 JoeJulian If you're changing your replica count, that's when you can specify.
20:27 JoeJulian You can add all the client nodes you want, but you'll have to add enough server nodes to create a multiple of your replica count.
20:28 voronaam I was talking about server nodes. Ok, thanks, I think I got it
20:28 JoeJulian We usually just drop the word "node" and call them clients and servers.
20:28 JoeJulian Technically, even a printer is a network node...
20:29 voronaam One more question. Do I have to rebalance? I am not worried about FS performance being balanced at the moment. If I do not rebalance, will it balance itself eventually? (as we put more and more data there)
20:29 JoeJulian If you're adding bricks, you'll have to at least do a rebalance..fix-layout
20:30 JoeJulian And balance is managed using a predictive hashing algorithm based on the filename, so it's not "really" going to end up balanced in the long run.
20:30 voronaam Thank you. I need to read about rebalancing more
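A minimal sketch of the commands discussed above; the volume name myvol and the server/brick paths are illustrative, not taken from the log:

    # grow an existing replica-2 volume by one replica pair (two bricks at a time)
    gluster volume add-brick myvol server3:/export/brick1 server4:/export/brick1
    # raise the replica count from 2 to 3 by adding one brick per existing pair
    gluster volume add-brick myvol replica 3 server5:/export/brick1
    # after adding bricks, spread the directory layout across the new bricks
    gluster volume rebalance myvol fix-layout start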
20:31 JoeJulian You might learn some about distributed hash translation from http://joejulian.name/blog/dht-misses-are-expensive/
20:31 glusterbot <http://goo.gl/A3mCk> (at joejulian.name)
20:31 duerF joined #gluster
20:31 JoeJulian .... I should have made that its own article...
20:32 voronaam I've been reading your blog already. Couple of things felt like they should be part of the official documentation
20:32 voronaam Extended attributes on the directories after gluster volume delete, for example
20:33 morsik joined #gluster
20:33 JoeJulian even better would be a cli method of resetting a brick.
20:35 voronaam Since I have a chance to talk to you, do you know what is the proper way of adding extra translators at the moment?
20:35 JoeJulian @meh
20:35 glusterbot JoeJulian: I'm not happy about it either
20:36 voronaam It is just in my use case I may need to write my own translator at some point :)
20:36 JoeJulian If you're referring to custom translators, there's a "filter" directory that allows you to alter the vol files when the volume is changed through the cli.
20:36 voronaam Awesome! That is what I was looking for!
20:37 cicero hmm
20:37 cicero @meh
20:37 glusterbot cicero: I'm not happy about it either
20:37 cicero nice.
20:37 voronaam Where would that be? Is it in /etc/gluster ?
20:37 voronaam Or do I need to create a custom build?
20:37 JoeJulian And if it's "at some point" then you're in better luck. http://www.gluster.org/community/documentation/index.php/Features/Easy_addition_of_custom_translators
20:37 glusterbot <http://goo.gl/o4WgkL> (at www.gluster.org)
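A rough sketch of how such a filter hook could look, assuming glusterd runs every executable in its filter directory with the path of a freshly generated volfile as the first argument; the directory path, script name, and log file below are illustrative assumptions, not confirmed in this log:

    #!/bin/sh
    # hypothetical filter script, e.g. /usr/lib64/glusterfs/3.4.0/filter/90-audit-trail
    # $1 is expected to be the volfile glusterd just regenerated; a real filter
    # would edit it in place to splice in a custom translator stanza
    VOLFILE="$1"
    echo "$(date): would rewrite $VOLFILE" >> /var/log/glusterfs/filter-audit.log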
20:39 JoeJulian It's on the "nice to have" list for 3.5
20:39 voronaam Hm, December 2 for the release date is awesome
20:40 voronaam I will wait till then. Thank you for pointing this out. I have subscribed to the list, but did not have enough time to read it in full
20:40 JoeJulian I know what that's like... :D
20:40 JoeJulian ... so ... what's your custom translator going to do?
20:41 voronaam I need to maintain an audit trail
20:41 voronaam That is to keep track of who modified which file and when. And I will need to keep MD5 sums of each file
20:42 voronaam I will either prepend it to the actual file content or keep it in a separate location :)
20:42 JoeJulian Interesting...
20:42 JoeJulian Do add it to the ,,(forge)
20:42 glusterbot JoeJulian: Error: No factoid matches that key.
20:42 JoeJulian what???!?!?!!!
20:43 JoeJulian @learn forge as http://forge.gluster.org
20:43 glusterbot JoeJulian: The operation succeeded.
20:44 voronaam That is still in the design phase. I will be happy with some application doing that instead of the FS. But since we have several applications modifying the files...
20:44 voronaam If I end up writing this in the end, I will open source it for sure.
20:44 JoeJulian @hack
20:44 glusterbot JoeJulian: The Development Work Flow is at http://goo.gl/ynw7f
20:44 JoeJulian Some useful stuff there.
20:45 voronaam Thank you
20:47 JuanBre just a question... does "gluster volume heal <volume> info split-brain" actually show a log, or the current files in a split-brain situation?
20:47 JoeJulian log of split-brains detected.
20:48 JoeJulian The only way I've found to clear that is to restart all glusterd.
20:48 JuanBre how can I clear it?
20:50 morsik joined #gluster
20:50 B21956 left #gluster
20:53 JuanBre if a file is "split-brained", it should affect the "trusted.afr.*" attributes, or am I completely wrong?
20:57 andreask joined #gluster
21:24 JoeJulian JuanBre: Usually that's true, but not necessarily always.
21:25 JoeJulian If those are all 0, but something else is different between the replicas, different size, owner, etc., that can be seen as split-brain as well.
21:27 JuanBre JoeJulian: in which situation can both files have all 0's at trusted.afr and still have different sizes or anything?
21:28 JoeJulian JuanBre: Shouldn't be possible.
21:28 JoeJulian Unless, maybe, there's something wrong with your brick filesystem or someone wrote directly to a brick?
21:31 JuanBre JoeJulian: Ok. I just wanted to know if I can use your script http://joejulian.name/blog/quick-and-dirty-python-script-to-check-the-dirty-status-of-files-in-a-glusterfs-brick/ to verify if I still have split-brain problems
21:31 glusterbot <http://goo.gl/grHFn> (at joejulian.name)
21:32 JoeJulian Yeah, that should be safe to run.
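For reference, the pending-operation counters discussed above can be read straight off each brick with getfattr; the brick path and file name are illustrative:

    # run on each server against the brick path, not the client mount
    getfattr -d -m . -e hex /export/brick1/path/to/file
    # non-zero trusted.afr.<volume>-client-N values indicate pending operations
    # for the other replica; all zeros on both bricks normally means they agree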
21:33 mmalesa joined #gluster
21:35 JuanBre Thanks again!
21:54 jporterfield joined #gluster
21:56 mmalesa joined #gluster
22:01 dbruhn joined #gluster
22:03 nueces joined #gluster
22:07 robo joined #gluster
22:55 glusterbot New news from resolvedglusterbugs: [Bug 968301] improvement in log message for self-heal failure on file/dir in fuse mount logs <http://goo.gl/GI3SX>
22:57 awheele__ joined #gluster
23:31 jporterfield joined #gluster
