
IRC log for #gluster, 2013-05-03


All times shown according to UTC.

Time Nick Message
00:01 y4m4 joined #gluster
00:06 yinyin joined #gluster
00:13 sjoeboo_ joined #gluster
00:29 nocterro joined #gluster
01:28 kevein joined #gluster
01:32 sjoeboo_ joined #gluster
01:35 fidevo joined #gluster
01:38 ehg joined #gluster
01:56 nueces joined #gluster
02:15 portante|afk joined #gluster
02:15 kkeithley joined #gluster
02:52 saurabh joined #gluster
03:17 badone joined #gluster
03:25 jskinner_ joined #gluster
03:29 glusterbot New news from resolvedglusterbugs: [Bug 949916] [Feature] GFID based auxillary mount point support <http://goo.gl/xLal4> || [Bug 949917] [Feature] GFID based auxillary mount point support <http://goo.gl/Asznu>
03:33 glusterbot New news from newglusterbugs: [Bug 949914] [Feature] GFID based auxillary mount point support <http://goo.gl/zFDC4>
03:45 itisravi joined #gluster
03:47 bulde joined #gluster
04:20 sjoeboo_ joined #gluster
04:21 tshm__ joined #gluster
04:26 mohankumar joined #gluster
04:27 Susant joined #gluster
04:31 shylesh joined #gluster
04:32 bala1 joined #gluster
04:40 bulde joined #gluster
04:47 Susant joined #gluster
04:56 aravindavk joined #gluster
04:59 sgowda joined #gluster
05:07 bala1 joined #gluster
05:10 hagarth joined #gluster
05:15 bala1 joined #gluster
05:30 raghu joined #gluster
05:32 bulde joined #gluster
05:39 lalatenduM joined #gluster
05:49 vshankar joined #gluster
05:52 vpshastry joined #gluster
06:00 sjoeboo_ joined #gluster
06:01 ramkrsna joined #gluster
06:01 ramkrsna joined #gluster
06:04 glusterbot New news from newglusterbugs: [Bug 959069] A single brick down of a dist-rep volume results in geo-rep session "faulty" <http://goo.gl/eaoet>
06:08 rotbeard joined #gluster
06:16 renihs joined #gluster
06:23 ctria joined #gluster
06:23 jtux joined #gluster
06:23 rgustafs joined #gluster
06:30 guigui1 joined #gluster
06:31 vimal joined #gluster
06:33 deepakcs joined #gluster
06:33 15SABE3JA joined #gluster
06:34 glusterbot New news from newglusterbugs: [Bug 959075] dht migration- open not sent on a cached subvol if open done on different fd once cached changes <http://goo.gl/Esx2G>
06:42 jikz__ joined #gluster
06:57 jtux joined #gluster
07:16 hybrid512 joined #gluster
07:21 andreask joined #gluster
07:26 pithagorians joined #gluster
07:34 zwu joined #gluster
07:40 jtux joined #gluster
07:50 guigui1 joined #gluster
08:01 ChikuLinu__ joined #gluster
08:03 helloadam joined #gluster
08:03 tjstansell1 joined #gluster
08:04 elyograg joined #gluster
08:04 hybrid512 joined #gluster
08:05 tshm__ left #gluster
08:09 chirino joined #gluster
08:16 andreask joined #gluster
08:22 lcligny joined #gluster
08:22 lcligny Hi there !
08:23 ngoswami joined #gluster
08:24 lcligny Maybe someone can help me on a small issue with log rotation
08:25 lcligny I'm trying to rotate logs with the command I found in the documentation: "gluster volume log rotate <VOLUME>"
08:26 lcligny but it doesn't rotate the logs that are in /var/log/glusterfs/bricks
08:26 lcligny I use gluster 3.3.1
08:27 rastar joined #gluster
08:30 gbrand_ joined #gluster
08:37 rgustafs joined #gluster
08:38 deepakcs joined #gluster
08:48 tshm_ joined #gluster
08:49 lcligny even if I add the value for the brick, "gluster volume log rotate Record x.x.x.x:/data/record", it won't rotate
08:59 lh joined #gluster
09:10 JoeJulian lcligny: Many of us just use logrotate with copytruncate.
09:11 ilbot_bck joined #gluster
09:11 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
09:18 lh joined #gluster
09:23 kevein joined #gluster
09:25 lcligny JoeJulian: so my problem with the built-in log rotation is a known issue? Thanks, I will try what you suggested
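For reference, the kind of logrotate stanza JoeJulian is describing; a minimal sketch assuming the stock 3.3 log locations, with the rotation schedule and count as placeholders:

    cat > /etc/logrotate.d/glusterfs-bricks <<'EOF'
    /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log {
        daily
        rotate 7
        compress
        missingok
        # copytruncate lets the gluster daemons keep their open log file handles
        copytruncate
    }
    EOF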
09:40 mika joined #gluster
09:46 rastar joined #gluster
09:57 E-T joined #gluster
10:06 vpshastry1 joined #gluster
10:08 satheesh joined #gluster
10:12 E-T Happy $LOCALTIME everyone. I wonder if there is a way of preventing the native client from writing to nodes/bricks that just went offline (distributed-only environment). Maybe by using a translator? By default parts of the data will not get written (when reaching the offline brick), leaving behind an inconsistent data structure.
10:13 sgowda joined #gluster
10:19 pithagorians anybody know why clients trying to connect to the server are disconnected? log is at https://gist.github.com/anonymous/5508362
10:19 glusterbot Title: gist:5508362 (at gist.github.com)
10:28 mgebbe_ joined #gluster
10:43 edward joined #gluster
10:45 manik joined #gluster
10:56 jtux joined #gluster
11:06 vpshastry1 joined #gluster
11:17 guigui3 joined #gluster
11:18 31NAAIJPW joined #gluster
11:18 guigui3 left #gluster
11:18 tshm_ Hi! Can anybody explain this to me? For reasons of minimizing downtime during an upgrade, I'm looking at a Gluster setup where, temporarily, the bricks run Gluster v3.2.7 and the client v3.1.3. In this case, if I delete files while the brick where those files reside is offline for one reason or another, the files will also disappear from that brick once it comes online again. However,...
11:18 tshm_ ...when the client runs v3.2.7 as well, that is not the case and the "deleted" files will remain on the brick.
11:19 guigui1 joined #gluster
11:20 tshm_ And, as if that wasn't enough, if you do an rm on the 3.2.7 client trying to remove a file that was already deleted before, it will say "No such file or directory", but the file will be removed from the brick that used to be offline before.
11:23 zykure joined #gluster
11:25 tshm_ Actually, it will disappear from the brick in question whether you try to access the file with rm, less, cat... but not until then.
11:31 rcheleguini joined #gluster
11:35 andrewjsledge joined #gluster
11:53 samppah @latest
11:53 glusterbot samppah: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
11:53 jf001 joined #gluster
11:54 the-me semiosis: JoeJulian: johnmark: I already saw http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=706625 just hadn't got the time to test it yet. but more critical is http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=698502#47
11:54 glusterbot <http://goo.gl/372qT> (at bugs.debian.org)
12:00 sjoeboo_ joined #gluster
12:07 andreask joined #gluster
12:12 jf001 having some issues with glusterfs 3.3.1.  When I mount a dir with nfs and do mkdir test, I get ~ 100 dirs all named 'test'
12:20 cekstam joined #gluster
12:27 sjoeboo_ joined #gluster
12:29 sjoeboo_ joined #gluster
12:35 sander^work joined #gluster
12:38 sander^work Does anyone know common reasons why lsyncd has delays when transferring files?
12:38 sander^work I read the default sync time is 15 seconds, but it's taking way longer than that.
12:40 plarsen joined #gluster
12:43 yinyin joined #gluster
12:43 plarsen joined #gluster
12:52 nickw joined #gluster
12:57 JoeJulian lcligny: Don't put words in my mouth. I never said anything about known issues, I simply stated a preference.
12:58 JoeJulian E-T: Not sure what you're seeing, but the expected behavior when a single brick in a distribute set is offline is that when a file that's on that brick (or predictively on that brick according to the filename's hash) is accessed, it should error.
12:59 nickw morning, guys
13:00 JoeJulian pithagorians: Because the server is not responding within 42 seconds. Check your server's load, your network, etc.
13:00 JoeJulian tshm_: Can't mix 3.X and 3.Y.
13:02 pithagorians <JoeJulian> i have 2 servers working in replica mode. the one you see in https://gist.github.com/anonymous/5508362 is the second server. can't understand why clients are not addressing the first server
13:02 tshm_ Everywhere I've read, it says 3.2 is backwards compatible with 3.1, although 3.3 isn't.    And they do work together. I can access files and replication works and all that.
13:02 glusterbot Title: gist:5508362 (at gist.github.com)
13:03 tshm_ It's just that the behaviour is a little bit different depending on setup.
13:04 JoeJulian jf001: Reformat your bricks without ext[34].
13:04 tshm_ How about the second behaviour I described? That is a setup entirely in v3.2. You delete a file while one brick is offline, bring that brick back online, and the file is still present on the brick - until you try accessing or deleting it
13:05 pithagorians <JoeJulian> even if i mount the nfs partition like https://gist.github.com/anonymous/5508999
13:05 glusterbot Title: gist:5508999 (at gist.github.com)
13:06 JoeJulian tshm_: That's fixed in 3.3
13:06 tshm_ Oh, so you're saying that's a bug?
13:06 JoeJulian More or less. An implementation deficiency.
13:07 JoeJulian ~mount server | pithagorians
13:07 glusterbot pithagorians: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
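In other words, the host named on the mount command only has to be reachable at mount time, to fetch the volume definition. A sketch of a mount that also names a fallback volfile server; the backupvolfile-server option is assumed to be available in this era's mount.glusterfs, and the host and volume names are made up:

    # assumes hosts storage1/storage2 serving a replicated volume "myvol"
    mount -t glusterfs -o backupvolfile-server=storage2 storage1:/myvol /mnt/myvol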
13:07 tshm_ That sounds better. ;-)   The issue here is whether to upgrade to 3.2 or 3.3,  and since we read everywhere that an upgrade to 3.2 can be done without (brick) downtime, that still seemed like an option.
13:10 17WABIIK8 joined #gluster
13:10 pithagorians <JoeJulian> my point is i get the error in the log, claiming that the client can't connect to storage2, when storage2's cpu is overloaded. why doesn't it connect to storage1, which is not cpu overloaded?
13:10 JoeJulian The game-changer for me which prompted me to go from 3.1 to 3.3 was the non-blocking self-heal.
13:11 JoeJulian pithagorians: It is. It's just telling you about the timeout.
13:11 JoeJulian During that 42 seconds, it can't do self-heal checks or writes.
13:12 JoeJulian ... and holy crap! You're using that much cpu for that long? :o
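The 42 seconds JoeJulian mentions is the default network.ping-timeout. It can be tuned per volume, though lowering it too far invites spurious disconnects; a sketch with a hypothetical volume name:

    # show any reconfigured value (42 seconds is the default if nothing is listed)
    gluster volume info myvol | grep ping-timeout
    # lower the client ping timeout to 20 seconds
    gluster volume set myvol network.ping-timeout 20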
13:12 guigui3 joined #gluster
13:12 tshm_ JoeJulian: That is one of the issues we're considering in the decision making! Thanks for input.
13:13 lh JoeJulian, ping, see /msg
13:14 bala joined #gluster
13:14 E-T JoeJulian: That's true - of course when a brick is down its data will not be available. i wonder if you can prevent write errors.
13:15 pithagorians <JoeJulian> the cpus of the storage servers are strong. in normal working status gluster eats about 10 % of cpu. and i have some php scripts there that use 100 % of one core. also i have 3.9 TB of data on the gluster partition. the hdd is 4.5 TB. can it affect the gluster performance ?
13:26 manik joined #gluster
13:30 hagarth joined #gluster
13:32 xymox joined #gluster
13:32 jf001 JoeJulian: what should I use instead then? xfs?
13:34 JoeJulian E-T: But if you did work around the brick being missing, wrote foo.bar, then the brick came back and lo and behold, it already had foo.bar, then what?
13:35 JoeJulian jf001: sure. xfs is the recommended filesystem this week. :D
13:35 jf001 alright, thanks
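The usual brick-formatting advice at the time, shown as a hedged sketch; the device and mount point are hypothetical, and -i size=512 leaves room in the inode for gluster's extended attributes:

    # format the brick device with xfs and a 512-byte inode size
    mkfs.xfs -i size=512 /dev/sdb1
    mount -t xfs -o noatime /dev/sdb1 /export/brick01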
13:35 E-T JoeJulian: I see your point. But in my case this could not happen (just writing data that can only exist once). A possible solution would be writing a translator for the client, right?
13:38 JoeJulian E-T: I /think/ so... that might actually be a good question for the gluster-devel mailing list. I'll double check the dht translator to see if there's a flag to allow that behavior as well.
13:40 sjoeboo_ joined #gluster
13:40 piotrektt joined #gluster
13:41 E-T JoeJulian: Thank you! Unfortunately there is not too much information on the translators available out there - but they seem to be very mighty :) as far as I know it is possible to do checks on df, open files and so on per brick. Checking if a brick is writeable should be manageable somehow. I could just write the missing part, maybe.
13:43 JoeJulian E-T: Nope, there's a very clear code path and there's no flags to bypass that.
13:44 JoeJulian You can check which brick a file exists on. You can even specify using some special filename which brick to create a file on.
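Checking which brick holds a file can be done from the client mount through the pathinfo extended attribute; a sketch, assuming a FUSE mount at /mnt/gluster:

    # prints the backend brick path(s) backing this file
    getfattr -n trusted.glusterfs.pathinfo /mnt/gluster/some-file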
13:45 E-T JoeJulian: So writing my own translator - for this scenario - would be the solution ....
13:45 JoeJulian You can use remove-brick to rebalance the files off the offline brick during the downtime. I recognize that may not be an option though.
13:45 aliguori joined #gluster
13:46 vpshastry joined #gluster
13:46 JoeJulian I don't know enough about writing translators to answer that definitively.
13:47 E-T well - in all cases where a brick goes down for maintenance this would be the way to do it. I am searching for a solution for bricks that unexpectedly go down – and maybe addressing the problem might take hours.
13:49 E-T funny nobody has come up with this problem so far. but I guess that’s because the best practice is building a distributed-replicated cluster
13:56 aliguori_ joined #gluster
13:56 Rocky__ left #gluster
13:57 itisravi joined #gluster
13:57 JoeJulian Right, resiliency is expected to be replication.
13:58 wushudoin joined #gluster
13:58 E-T jep - but with 400 TB that's quite a cost :)
14:00 JoeJulian About US$15000 for commodity drives.
14:01 JoeJulian But yeah, it's all about trade-offs isn't it.
14:01 JoeJulian What's your application, if I may ask?
14:02 E-T it is - it is - unfortunately ^^
14:02 E-T video streaming
14:02 E-T we store tons (well TB) of video data
14:03 E-T what i need is a cheap scale-out storage solution for online archiving
14:04 E-T that’s why data not being available is ok - but not storing it is kind of a disaster
14:05 jruggiero joined #gluster
14:06 jruggiero joined #gluster
14:07 jruggiero joined #gluster
14:07 jruggiero left #gluster
14:08 JoeJulian Can't you try a filename, if it fails, try another one? I assume that you'll have enough bricks to make two filenames hashing to the same brick fairly unlikely.
14:08 portante|ltp joined #gluster
14:09 JoeJulian It just seems to me that would be the most efficient. And I presume you're storing the filenames in some sort of database reference so the actual filename is unimportant.
14:10 E-T exactly! But security of data getting written is then only on the application side.
14:11 JoeJulian Isn't it anyway?
14:13 JoeJulian There's a "worm" mode, I can't remember if it's a mount option or a volume option, that would be very reasonable to provide for your expectations. File a bug report with that feature request (to begin with).
14:14 JoeJulian Seriously, glusterbot? I blame myself though... I neglected to make that regexp case insensitive.
14:14 JoeJulian file a bug
14:14 glusterbot http://goo.gl/UUuCq
14:15 E-T true. it is implemented. but say I have to write 1000 files. 10% will not get written (because one out of ten bricks is down). this will need manual intervention.
14:16 JoeJulian So in my thinking process, if a volume is worm (write once read mostly) then you don't have to worry about filename contention and you can just write the file to the next dht subvolume if the hash search fails.
14:16 JoeJulian Ugh... my language skills today look like I'm writing with 2 hours sleep...
14:16 JoeJulian ... oh, I am...
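For the worm mode JoeJulian is half-remembering: it appears both as a volume option and as a mount option around this era. A hedged sketch; the volume name is hypothetical and the exact option spelling should be checked against the installed version:

    # as a volume-wide setting
    gluster volume set myvol features.worm enable
    # or per mount
    mount -t glusterfs -o worm storage1:/myvol /mnt/myvol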
14:20 E-T ^^ - yep - that's about what i am looking for
14:20 JoeJulian So what I was suggesting was a simple loop. You're using some sort of random filename generator to write those 1000 files (and 10% is a lot uglier than I was thinking, too). While open fails, generate a new filename and try to open again.
14:21 JoeJulian You'll have a 90% chance the new filename will be on a working brick.
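A minimal sketch of the retry loop JoeJulian describes, assuming random names and that a create landing on the dead brick surfaces as an error on the write; the paths and retry count are made up:

    # try up to 5 random names; a name hashing to the dead brick fails fast
    for attempt in 1 2 3 4 5; do
        name="$(uuidgen).bin"
        if cp /tmp/payload "/mnt/gluster/incoming/$name"; then
            echo "stored as $name"
            break
        fi
    done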
14:23 E-T mhhh - kind of a solution. but more or less just a workaround. preventing this scenario already on the gluster client would be smoother...
14:23 wushudoin left #gluster
14:25 yinyin joined #gluster
14:28 JoeJulian So, E-T, file a bug report at the following link, for the enhancement request, where in worm mode, if a dht_layout_search fails, choose another brick.
14:28 glusterbot http://goo.gl/UUuCq
14:28 JoeJulian at that link *
14:28 JoeJulian If you want to ,,(hack) on it yourself, the file I'm looking at is xlators/cluster/dht/src/dht-helper.c
14:28 glusterbot The Development Work Flow is at http://goo.gl/ynw7f
14:36 E-T cheers - i will do so !
14:39 luis_silva joined #gluster
14:55 manik joined #gluster
14:58 flrichar joined #gluster
15:00 vpshastry left #gluster
15:01 m0zes anyone using the glusterfs hadoop integration with 3.3.1?
15:07 yinyin_ joined #gluster
15:16 wushudoin joined #gluster
15:16 wushudoin left #gluster
15:17 m0zes can I use the hadoop integration with an existing gluster volume? one with the root containing ${user.name}/ ?
15:23 daMaestro joined #gluster
15:24 johnmark m0zes: I think so? I think it should work with anything >= 3.3.x
15:30 m0zes johnmark: I guess my question was more related to how it would behave with data that wasn't added via the hadoop-integration client.
15:30 luckybambu joined #gluster
15:31 m0zes johnmark: also, http://blog.beocat.cis.ksu.edu/posts/2013/05/01/rebooting-live-fileservers-running-a-glusterfs-distributed-volume/
15:31 glusterbot <http://goo.gl/5Cbct> (at blog.beocat.cis.ksu.edu)
15:31 bchilds mazes : yes you can, you don't need an explicit ingest like hdfs
15:31 bchilds *m0zes sorry
15:32 m0zes bchilds: hooray :)
15:32 vpshastry joined #gluster
15:32 m0zes do we need the hdfs-integration on just the fileservers or on all the nodes as well?
15:34 bchilds you can place it on all or some
15:34 bchilds the more you have the better your locality + performance
15:34 m0zes how does locality work if the data all goes back to the fileservers anyway?
15:36 glusterbot New news from newglusterbugs: [Bug 959486] RDMA transport transient bad file descriptor error (dd, iozone, java sequence writer tested) <http://goo.gl/8yp7k> || [Bug 959477] nfs-server: stale file handle when attempting to mount directory <http://goo.gl/28GOf>
15:37 bchilds i don't know your arch. but generally we try to keep the task trackers + gluster nodes 1:1. the locality is ultimately handled by FUSE, we're just trying to assist it by landing jobs on the node with the data
15:37 johnmark m0zes: that's an interesting idea. what is your particular use case?
15:40 m0zes I've been tasked with getting a small subsection of our existing cluster set up for hadoop, but I want all the data stored on the existing fileserver pair
15:42 bchilds ah. interesting
15:42 johnmark m0zes: that is interesting
15:42 johnmark of course, I'd like to see a write up of that process once you get it working :)
15:43 m0zes johnmark: that is definitely something I plan on doing more of. :)
15:44 johnmark m0zes: which reminds me, I still owe you guys a visit
15:44 bchilds i've thought a good use case is [production file server cluster] ---nightly backup---> [backup of prod cluster w/hadoop]
15:44 johnmark bchilds: can we generalize the data locality bits? so it's not just with the HDFS shim?
15:44 m0zes johnmark: we're planning a CI day in the fall. not sure on a date yet.
15:45 johnmark m0zes: excellent
15:45 bchilds you could determine locality just from xfs attribute.. so you could do m/r w/o shim..
15:46 johnmark bchilds: ok, got it. have you checked out pmux yet?
15:46 bchilds but it's not an HDFS shim. HDFS is gone with gluster.. gluster takes over at a higher interface level
15:46 bchilds no not yet
15:46 johnmark bchilds: got it, ok
15:47 bchilds ruby in m/r now hah
15:47 bchilds i heard someone talking about javascript too
15:47 bchilds everyone wants a piece of the m/r action
15:48 johnmark bchilds: heh heh :) yeah
15:50 nueces joined #gluster
15:53 Supermathie grrrr fuser mount crashed again
15:54 vpshastry joined #gluster
15:55 DEac- joined #gluster
16:04 manik joined #gluster
16:07 sander^work joined #gluster
16:08 yinyin_ joined #gluster
16:11 flrichar joined #gluster
16:13 aliguori joined #gluster
16:15 vpshastry1 joined #gluster
16:20 nickw joined #gluster
16:21 vpshastry joined #gluster
16:26 vpshastry1 joined #gluster
16:33 red_solar joined #gluster
16:39 dumbda joined #gluster
16:39 dumbda Guys I have an issue
16:39 dumbda I have 2 Ubuntu web servers where I run glusterfs server, client
16:40 dumbda i've created 2 volumes, /var/www/dom/media for content and /var/www/dom/var for cache
16:41 dumbda Each server is running the gluster server and client so that we have some redundancy and get to keep the data replicated. The var volume is written and read very heavily
16:41 dumbda The problem we are encountering is that randomly the mounts will break, and when you perform an ls -lah on the directory it shows as d???????. To resolve the issue all we have to do is umount the directory and remount it.
16:42 dumbda glusterfs log shows:
16:42 dumbda [2013-05-02 11:32:02.105021] I [client3_1-fops.c:502:client3_1_unlink_cbk] 0-site-media-client-1: remote operation failed: No such file or directory  [2013-05-02 11:32:02.105270] I [client3_1-fops.c:502:client3_1_unlink_cbk] 0-site-media-client-0: remote operation failed: No such file or directory
16:42 dumbda [2013-05-02 11:32:02.105299] W [fuse-bridge.c:911:fuse_unlink_cbk] 0-glusterfs-fuse: 11806336: UNLINK() /catalog/product/cache/1/image/1000x1000/9df78eab33525d08d6e5fb8d27136e95/v/e/vera-wang-with-love-vase-0915740$
16:42 karoshi joined #gluster
16:42 dumbda What can be causing this issue?
16:45 JoeJulian Those are information and warning, so no errors... Not sure if that's the cause. What about the brick logs, did you check those?
16:45 ctria joined #gluster
16:45 JoeJulian I imagine it's some race condition though.
16:46 dumbda No I should I guess
16:46 dumbda YEah the servers are on ESX servers
16:46 dumbda 2 indentical Ubuntu guests
16:47 JoeJulian 3.3.1?
16:47 dumbda 3.2.5
16:47 dumbda Linux VDED-XXX-XXX 3.2.0-39-generic #62-Ubuntu SMP Thu Feb 28 00:28:53 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
16:48 JoeJulian @ppa
16:48 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
16:49 JoeJulian Not saying that it will fix your race condition, but if you have a way of testing, it might be worth it.
16:49 dumbda You think it might be the cause?
16:49 dumbda old gluster package?
16:50 dumbda Was there any reports on simillar issues?
16:50 JoeJulian There have been a number of performance tweaks since 3.2. It's possible.
16:51 dumbda Ok let me look trough brick logs and see if there is anything unusual.
16:51 dumbda Thank you.
16:53 JoeJulian Hrm... the Magento code for image caching is pretty inefficient. :(
16:59 thomasl__ joined #gluster
17:23 _ilbot joined #gluster
17:23 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
17:23 jskinner joined #gluster
17:26 JoeJulian Since that's a null inode on a callback, I would probably enable trace logs to see what path got us there.
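Turning on trace logging for the client, as JoeJulian suggests, can be done per volume. A sketch using the volume name implied by the "0-site-media-client" log lines above; remember to turn it back down, TRACE is very chatty:

    gluster volume set site-media diagnostics.client-log-level TRACE
    # ...reproduce the failing unlink, then restore the default
    gluster volume set site-media diagnostics.client-log-level INFO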
17:27 dustint joined #gluster
17:40 manik joined #gluster
17:45 manik joined #gluster
17:45 mattf joined #gluster
18:06 mattf joined #gluster
18:06 mattf joined #gluster
18:08 mattf joined #gluster
18:08 mattf joined #gluster
18:22 jbrooks Hey, I have a q on the gluster test framework. The glusterfs/tests/README file says to build and install the version you want to test, then it tells you how *not* to mount glusterfs, then it tells you to execute run-tests.sh from the root -- is the test framework expecting you to create volumes, put data in them, mount them, and *then* run run-tests.sh?
18:22 jbrooks https://forge.gluster.org/glusterfs-core/glusterfs/blobs/raw/master/tests/README
18:23 glusterbot <http://goo.gl/83zRA> (at forge.gluster.org)
18:23 lalatenduM joined #gluster
18:25 hagarth jbrooks: all tests create their own volumes and run tests there.
18:26 jbrooks hagarth, all right, cool
18:27 jbrooks hagarth, when I run that script, it complains, right off the bat: Volume patchy does not exist
18:27 jbrooks and then many other errors that perhaps stem from that
18:28 hagarth jbrooks: do you have gluster in $PATH? I have seen that error when gluster is not available in $PATH.
18:28 jbrooks hagarth, I installed the rpm of the alpha 3
18:29 jbrooks Yes, it's in my path
18:29 jbrooks I installed the alpha 3 rpm, but am running the tests from git, with the release 3.4 branch checked out
18:30 jbrooks I'm running as root, don't know if that could be causing a problem?
18:30 hagarth you would be required to run as root.
18:30 pithagorians joined #gluster
18:31 jbrooks OK
18:31 jbrooks I'm helping out w/ the test day, and John Mark wants to use the test framework, so I'm looking at how one does that ;)
18:32 hagarth jbrooks: can you set -x in tests/basic/volume.t after cleanup & invoke this script standalone via prove tests/basic/volume.t ?
18:32 jbrooks hagarth, how do I clean up?
18:32 hagarth that might give a clue on why creation of volume patchy failed.
18:33 jbrooks If there isn't a script for it I can restore my snapshot
18:33 hagarth jbrooks: the framework takes care of cleaning up. I referred to cleanup in line 6 of tests/basic/volume.t
18:33 jbrooks Oh I see
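Putting hagarth's pointers together, the test-day workflow being discussed looks roughly like this, run as root from a git checkout (prove comes with Perl):

    cd glusterfs                    # git checkout with the release-3.4 branch
    ./run-tests.sh                  # runs every .t under tests/
    prove -v tests/basic/volume.t   # or run a single test with verbose output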
18:35 vpshastry1 left #gluster
18:37 jbrooks hagarth, volume.t completes successfully, it's bd.t that has the error. The first error there seems to be: Wrong brick type: device, use <HOSTNAME>:<export-dir-abs-path>
18:41 hagarth jbrooks: you would need lvm2-devel for bd.t to succeed
18:41 jbrooks hagarth, cool, I'll install it
18:42 hagarth jbrooks: I will be afk now. Shoot me an email if you need help with this framework or if we want to have more tests for the testday.
18:42 jbrooks hagarth, do you know if the test framework is/will be packaged?
18:42 jbrooks hagarth, will do
18:42 jbrooks thanks
18:43 hagarth jbrooks: no plans for that as of now
18:43 jbrooks cool
18:55 Kioo joined #gluster
19:26 y4m4 joined #gluster
19:27 y4m4 joined #gluster
20:01 gbrand_ joined #gluster
20:35 y4m4 joined #gluster
20:50 jbrooks got a kernel panic while running the gluster test framework -- guess that counts as a "fail"
20:53 JoeJulian jbrooks:  For the big 3.4 testing day coming up you guys should talk to #osuosl to see about getting a bunch of VMs for people to use for the day of testing?
20:53 jbrooks JoeJulian, have they done that in the past?
20:53 JoeJulian Not that I'm aware of.
20:54 jbrooks It sounds complicated ;)
20:54 JoeJulian I could see rackspace doing it, but it would be way more complicated.
20:55 mattf joined #gluster
20:55 mattf joined #gluster
21:06 cjh_ joined #gluster
21:06 cjh_ does anyone know if glupy was folded into the gluster official repo?
21:13 soukihei I am getting errors from my glusterfs client for the brick I have mounted stating 'invalid argument'. It appears I have the volume mounted correctly: :/brick01 on /mnt type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072) Any thoughts on why I cannot write to the filesystem?
21:14 JoeJulian soukihei: version?
21:15 soukihei v3.3.1 on centos6
21:15 soukihei in aws
21:18 JoeJulian And all the mount error says is "invalid argument"? Usually it's more verbose...
21:18 soukihei touch /mnt/file1
21:18 soukihei touch: setting times of `/mnt/file1': Invalid argument
21:18 soukihei the mount command worked just fine. It is a problem of not being able to write the filesystem after it is mounted
21:19 JoeJulian Well that sounds like split-brain.
21:19 JoeJulian Have you looked in your client log yet?
21:19 JoeJulian Or checked gluster volume heal $vol info split-brain
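The two checks JoeJulian is pointing at, spelled out with soukihei's volume and mount point; the client log path assumes a default install, where a mount at /mnt logs to mnt.log under /var/log/glusterfs:

    gluster volume heal brick01 info split-brain
    less /var/log/glusterfs/mnt.log   # FUSE client log for the /mnt mount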
21:22 soukihei The client log has a bunch of disconnected messages. Is that normal?
21:22 mattf left #gluster
21:27 JoeJulian Not really, no.
21:27 JoeJulian You want to fpaste the log so I can take a look?
21:29 soukihei I'm going to start over and see if it is something I did in the setup of the volume
21:29 soukihei hopefully it doesn't duplicate itself
21:29 soukihei but thank you
21:35 soukihei Weird. Blowing away the volume and the filesystems then recreating it all fixed it
21:35 soukihei good exercise though
21:37 JoeJulian jbrooks: Is there supposed to be some content at http://www.gluster.org/community/documentation/index.php/GlusterFest.?
21:37 glusterbot <http://goo.gl/88I8a> (at www.gluster.org)
21:38 JoeJulian Ah, nevermind. I just realized that tbird included the "."
21:45 bennyturns joined #gluster
21:46 chirino_m joined #gluster
21:59 badone joined #gluster
22:04 sjoeboo_ joined #gluster
22:04 portante|ltp joined #gluster
22:27 fidevo joined #gluster
22:28 wN joined #gluster
23:11 zaitcev joined #gluster
23:22 thomasle_ joined #gluster
23:22 badone joined #gluster
23:36 sjoeboo_ joined #gluster
23:43 fubada joined #gluster
23:43 fubada hi, im using gluster to share my kvm machine image store
23:43 fubada basically a 2 node replica, sharing /var/lib/libvirt/images between each other
23:44 fubada and there seems to be a lot of cpu overhead after moving to gluster
23:44 fubada is that expected?
23:44 fubada my avg kvm server load went from like 2, to ~10
23:56 badone joined #gluster
