
IRC log for #gluster-dev, 2016-01-07


All times shown according to UTC.

Time Nick Message
00:29 Ethical2ak joined #gluster-dev
00:41 shyam joined #gluster-dev
00:43 zhangjn joined #gluster-dev
01:06 zhangjn joined #gluster-dev
01:07 EinstCrazy joined #gluster-dev
01:52 overclk joined #gluster-dev
02:09 overclk joined #gluster-dev
02:38 zhangjn joined #gluster-dev
02:47 ilbot3 joined #gluster-dev
02:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:55 overclk joined #gluster-dev
03:06 pranithk joined #gluster-dev
03:08 kdhananjay joined #gluster-dev
03:09 gem joined #gluster-dev
03:11 hagarth joined #gluster-dev
03:22 overclk joined #gluster-dev
03:30 gem joined #gluster-dev
03:40 vmallika joined #gluster-dev
03:44 atinm joined #gluster-dev
03:46 overclk rafi, mind giving the bz # of the bug (tiering + bitrot) we were discussing yesterday?
03:59 sankarshan_ joined #gluster-dev
03:59 ppai joined #gluster-dev
04:00 shubhendu joined #gluster-dev
04:07 kanagaraj joined #gluster-dev
04:17 ashiq joined #gluster-dev
04:18 ashiq joined #gluster-dev
04:26 kshlm joined #gluster-dev
04:29 poornimag joined #gluster-dev
04:35 sakshi joined #gluster-dev
04:35 Manikandan joined #gluster-dev
04:45 nbalacha joined #gluster-dev
04:56 kotreshhr joined #gluster-dev
04:58 apandey joined #gluster-dev
05:04 pppp joined #gluster-dev
05:06 pppp joined #gluster-dev
05:11 ndarshan joined #gluster-dev
05:14 EinstCrazy joined #gluster-dev
05:19 pranithk joined #gluster-dev
05:20 EinstCra_ joined #gluster-dev
05:20 gem joined #gluster-dev
05:23 zhangjn joined #gluster-dev
05:26 Apeksha joined #gluster-dev
05:34 zhangjn_ joined #gluster-dev
05:35 rafi overclk: sure
05:37 hgowtham joined #gluster-dev
05:38 Bhaskarakiran joined #gluster-dev
05:38 jiffin joined #gluster-dev
05:39 aravindavk joined #gluster-dev
05:42 rafi overclk: https://bugzilla.redhat.com/show_bug.cgi?id=1296048
05:42 glusterbot Bug 1296048: unspecified, unspecified, ---, rhs-bugs, NEW , Attach tier + nfs : Creates fail with invalid argument errors
05:43 ndarshan joined #gluster-dev
05:45 zhangjn joined #gluster-dev
05:48 kdhananjay joined #gluster-dev
05:52 asengupt joined #gluster-dev
05:54 zhangjn joined #gluster-dev
06:00 zhangjn joined #gluster-dev
06:00 vmallika joined #gluster-dev
06:02 zhangjn joined #gluster-dev
06:08 vimal joined #gluster-dev
06:13 Humble joined #gluster-dev
06:19 hagarth joined #gluster-dev
06:21 ggarg joined #gluster-dev
06:31 gem joined #gluster-dev
06:32 hgowtham_ joined #gluster-dev
06:34 Manikandan joined #gluster-dev
06:35 zhangjn joined #gluster-dev
06:36 zhangjn joined #gluster-dev
06:37 kaushal_ joined #gluster-dev
06:46 Saravana_ joined #gluster-dev
06:48 atinm joined #gluster-dev
06:56 itisravi joined #gluster-dev
06:57 gem_ joined #gluster-dev
07:04 zhangjn joined #gluster-dev
07:11 EinstCrazy joined #gluster-dev
07:17 rraja_ joined #gluster-dev
07:29 Manikandan joined #gluster-dev
07:29 skoduri joined #gluster-dev
07:29 EinstCrazy joined #gluster-dev
07:30 atinm joined #gluster-dev
07:46 deepakcs joined #gluster-dev
08:03 atalur joined #gluster-dev
08:05 zhangjn joined #gluster-dev
08:30 ppai joined #gluster-dev
08:47 kshlm joined #gluster-dev
08:50 vmallika joined #gluster-dev
08:51 atinm joined #gluster-dev
08:51 ggarg joined #gluster-dev
09:05 aravindavk joined #gluster-dev
09:09 kshlm joined #gluster-dev
09:11 spalai joined #gluster-dev
09:12 poornimag joined #gluster-dev
09:14 spalai left #gluster-dev
09:32 ashiq_ joined #gluster-dev
09:51 atalur joined #gluster-dev
09:57 pranithk joined #gluster-dev
09:59 zhangjn joined #gluster-dev
10:02 atalur skoduri, are you available now?
10:03 skoduri atalur, in the middle of something...shall ping you back in 5-10min..
10:03 skoduri is it about compound fops?
10:04 atalur skoduri, yes. Okay. :)
10:11 kdhananjay pranithk: ping
10:11 glusterbot kdhananjay: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
10:11 kdhananjay oops
10:11 kdhananjay pranithk: ping. Got a question.
10:13 kdhananjay pranithk: Can we make afr's data self-heal use eager lock too? I mean we can make it take a full lock initially and based on the blocked locks count on the file at some phase of the heal, we can make SH skip the unlock and relock, and instead piggy-back on the previously held full lock?
10:14 kdhananjay and *maybe* we could do the same for entry sh too!
10:16 kdhananjay pranithk: on the other hand, if blocked lk count > 1, we make shd take a chunk lock, and then release the full lock, and fall back to the older ways.
10:16 * kdhananjay doesn't foresee deadlocks as such (yet).
10:17 pranithk kdhananjay: Yes we can do this. :-)
10:17 pranithk kdhananjay: I was thinking it will be duplicate effort once you complete the library you are working on
10:17 kdhananjay pranithk: hmmm yeah, that's always there.
10:18 ashiq joined #gluster-dev
10:31 ggarg joined #gluster-dev
10:39 kdhananjay pranithk: anyway i will just write some quick code to see what performance it yields.
10:42 pranithk kdhananjay: sure
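
A stand-alone sketch of the eager-lock idea kdhananjay outlines above, before any "quick code" gets written: start the heal under one full lock and only fall back to per-chunk locking when other lock requests are queued. The lock helpers here are made-up stubs, not AFR/shd code; only the control flow reflects the discussion.

    # Illustrative only: the helpers below stand in for the real cluster-wide
    # inodelk calls so that the locking decision itself is visible.

    def take_full_lock(f):        print("full lock   ", f)
    def release_full_lock(f):     print("full unlock ", f)
    def take_chunk_lock(f, c):    print("chunk lock  ", f, c)
    def release_chunk_lock(f, c): print("chunk unlock", f, c)
    def heal_chunk(f, c):         print("heal chunk  ", f, c)
    def blocked_lock_count(f):    return 0   # pretend nobody else is waiting

    def heal_file(f, chunks):
        take_full_lock(f)                    # take one full lock up front
        eager = True                         # piggy-back on it while we can
        for chunk in chunks:
            if eager and blocked_lock_count(f) > 1:
                # Other requests are blocked on our lock: take a chunk lock,
                # drop the full lock and fall back to the older per-chunk way.
                take_chunk_lock(f, chunk)
                release_full_lock(f)
                eager = False
            elif not eager:
                take_chunk_lock(f, chunk)
            heal_chunk(f, chunk)
            if not eager:
                release_chunk_lock(f, chunk)
        if eager:
            release_full_lock(f)

    heal_file("file.img", range(3))
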
10:48 apandey joined #gluster-dev
11:03 kkeithley1 joined #gluster-dev
11:14 ndevos msvbhat: I have difficulties getting your version of distaf to run only on one volume
11:14 ndevos msvbhat: could you pass me an example of a configuration file that makes distaf use a volume, created out of one brick?
11:15 msvbhat ndevos: What does it say? The error message
11:16 ndevos msvbhat: the tests get run on multiple volumes, at least that is what I understand from the test-names
11:16 * msvbhat tries to write a config file
11:16 ndevos msvbhat: like this: FAIL: test_9_dist_disperse_glusterfs_fuse_mount_noatime
11:17 ndevos I did not configure a disperse volume...
11:17 msvbhat ndevos: What about the run_global_mode?
11:18 msvbhat I think it has been set to False, right?
11:18 ndevos global_mode: False
11:18 msvbhat In which case your test case "fuse_mount_noatime" will be run on all possible volume types
11:19 msvbhat So if you don't want to run the test case on disperse volume, that should be set in the test case doc string in the yaml format
11:19 ndevos ah, yes, setting that to True helps :)
11:19 ndevos nah, any volume should do
11:19 msvbhat Ah, Okay, So you had a volume and wanted to run on only that volume right?
11:20 msvbhat In that case global_mode = True
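
In other words, with global_mode: False the same test gets expanded over every supported volume type, and with True it only runs on the volume(s) defined in the config. A rough illustration of how a runner could interpret that flag; the volume-type names are made up, only the global_mode behaviour is taken from the discussion.

    ALL_VOLUME_TYPES = ["dist", "rep", "dist_rep", "disperse", "dist_disperse"]

    def volumes_to_run_on(config):
        # global_mode False: expand the test over every supported volume type.
        # global_mode True: stick to the volumes listed in the config.
        if config.get("global_mode"):
            return [v["name"] for v in config.get("volumes", [])]
        return ALL_VOLUME_TYPES

    print(volumes_to_run_on({"global_mode": False}))
    print(volumes_to_run_on({"global_mode": True,
                             "volumes": [{"name": "my_volume"}]}))
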
11:27 Saravana_ joined #gluster-dev
11:48 ndevos msvbhat: sorry was in a call, but yes, exactly that
11:50 poornimag joined #gluster-dev
11:52 kotreshhr joined #gluster-dev
11:53 ndevos msvbhat: is it correct to have the volume created in advance? I get these errors:
11:53 ndevos http://termbin.com/00k4 - as test result
11:54 ndevos http://termbin.com/yq56 - the log
11:54 ndevos http://termbin.com/ofkd - my config file
11:56 hgowtham joined #gluster-dev
11:58 zhangjn joined #gluster-dev
11:59 zhangjn joined #gluster-dev
11:59 zhangjn joined #gluster-dev
12:01 zhangjn joined #gluster-dev
12:05 Apeksha joined #gluster-dev
12:21 kotreshhr left #gluster-dev
12:24 Apeksha joined #gluster-dev
12:40 sankarshan_ joined #gluster-dev
12:44 * msvbhat looking at the logs
12:45 msvbhat ndevos: Looks like some bug in util.
12:45 msvbhat ndevos: Let me look into it
12:46 msvbhat ndevos: But just to confirm my_volume was already present right?
12:46 ndevos msvbhat: yes, my_volume was created on vm016.example.com
12:47 msvbhat ndevos: Okay, Looks like some bad error handling.
12:48 msvbhat ndevos: I will try to find and fix the issue
12:52 ndevos msvbhat: thanks!
13:05 poornimag joined #gluster-dev
13:06 zhangjn joined #gluster-dev
13:08 msvbhat ndevos: The issue should be in the way you specify the client in config yaml
13:08 msvbhat ndevos: In the clients section, Can you make clients a list?
13:08 msvbhat ndevos: Like clients: [ "client_hostname.domain.com" ]
13:08 ndevos msvbhat: I'll try that
13:09 msvbhat ndevos: I could recreate the issue with your config file. Clients should be a list, not a dict
13:09 msvbhat ndevos: Just like nodes
13:09 spalai joined #gluster-dev
13:11 ndevos msvbhat: hmm, I must be doing something wrong... which clients section? the main one, or under volumes?
13:12 msvbhat ndevos: Under volumes section
13:12 msvbhat ndevos: There you specify clients for that particular volume
13:12 ndevos msvbhat: I got this config now: http://termbin.com/h9ke
13:13 msvbhat ndevos: That should be working.
13:13 msvbhat ndevos: Although peers under volumes should also be a list
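
The structural point being made, as a small check with PyYAML; the hostnames and surrounding keys are placeholders, the only detail taken from the discussion is that clients (and peers) under a volume must be YAML lists, not mappings.

    import yaml   # PyYAML, assumed to be installed

    config = yaml.safe_load("""
    volumes:
      - name: my_volume
        peers:   [ "vm016.example.com" ]
        clients: [ "client_hostname.domain.com" ]
    """)

    vol = config["volumes"][0]
    assert isinstance(vol["clients"], list)   # a mapping here is what broke the run
    assert isinstance(vol["peers"], list)
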
13:14 ndevos msvbhat: http://termbin.com/kua1 is the result
13:15 * ndevos makes the peers also a list and tries again
13:15 msvbhat ndevos: So right now you deleted the my_volume right?
13:15 msvbhat ndevos: I mean my_volume is not present anymore
13:16 ndevos msvbhat: no, I didnt touch it
13:16 ndevos but yeah, the volume is gone...
13:16 spalai left #gluster-dev
13:16 Ryllise joined #gluster-dev
13:17 msvbhat ndevos: Can you please check?
13:17 msvbhat ndevos: Ah, Okay
13:17 msvbhat ndevos: So now the issue is with peer_probe validation
13:18 msvbhat ndevos: There was a format change in the output of "gluster peer status" recently. Let me check if that is taken care here
13:18 ndevos msvbhat: I re-created the volume, and the tests seem to run now
13:18 msvbhat ndevos: Which version of gluster are you running BTW?
13:19 ndevos msvbhat: distaf seems to delete the volume... http://termbin.com/log2
13:19 ndevos msvbhat: glusterfs-3.7.6-1.fc23.x86_64 - latest in Fedora 23
13:20 ndevos my mount test actually fails, which is expected because I have not applied the patches yet
13:20 ndevos msvbhat: the failure on the commandline is a little ugly, I think? http://termbin.com/knqb
13:20 msvbhat ndevos: Yes, It deletes the volumes now
13:21 ndevos msvbhat: ah, can it create the volume automatically too?
13:21 msvbhat ndevos: Should it not delete the volume after all the tests?
13:21 Ethical2ak joined #gluster-dev
13:21 ndevos msvbhat: well, maybe not delete it when I manually created it
13:21 msvbhat ndevos: It creates it, yes, but peer_probe validation failed because of some peer status format change
13:22 msvbhat ndevos: Hmm... I'm thinking how I would know the volume was manually created
13:22 kdhananjay joined #gluster-dev
13:23 msvbhat ndevos: BTW I can suppress the ugly assertion error output.
13:23 ndevos msvbhat: what do I need to change in my config to have the volume created automatically?
13:23 msvbhat ndevos: Hmm.. Let me check that once
13:23 msvbhat ndevos: Nothing actually. If the volume is not created, it will create it
13:23 msvbhat ndevos: In this log http://termbin.com/kua1
13:24 ndevos msvbhat: oh, ok, but what bricks will it use?
13:26 msvbhat ndevos: Since the volume is not there it tried to create it. But then it tried to peer_probe and there we have peer_status validation which failed
13:27 msvbhat ndevos: Ah, That is right now /bricks/brick{0..n} I think.
13:27 msvbhat ndevos: Let me check that once
13:28 msvbhat ndevos: Yes, it chooses /bricks/brick0 and /bricks/brick1 on each server
13:28 ndevos ok
13:29 msvbhat ndevos: Right now it is hardcoded. The plan is to make it configurable using nodes: section in yaml
13:29 msvbhat ndevos: But not implemented yet
13:29 Saravana_ joined #gluster-dev
13:30 ira joined #gluster-dev
13:30 msvbhat ndevos: I have some questions on that
13:30 msvbhat ndevos: Like whats the best way to do it
13:31 msvbhat ndevos: I was planning on making use of gdeploy for the same
13:31 msvbhat ndevos: Do you think it is a good idea? To make use of gdeploy to provision the backend bricks?
13:32 ndevos msvbhat: maybe yes, but I would definitely like an option to use a prepared/existing environment too
13:33 msvbhat ndevos: Yeah, So what I am thinking is like this
13:36 msvbhat ndevos: Each node will have devices or bricks section
13:37 ndevos msvbhat: yeah, that makes sense
13:37 msvbhat ndevos: Both should be mutually exclusive. If both are specified we consider bricks and ignore devices
13:38 msvbhat ndevos: If bricks are specified, we assume bricks are provisioned already
13:39 ndevos msvbhat: sounds good to me
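
A hedged sketch of the rule just agreed on; the nodes layout is the not-yet-implemented proposal, so the key names are guesses. If a node lists bricks they are taken as already provisioned, devices would be provisioned first (possibly via gdeploy), and today's behaviour is the hardcoded /bricks/brick0, /bricks/brick1 fallback.

    def bricks_for_node(node, count=2):
        # Proposal discussed above: bricks and devices are mutually exclusive,
        # and if both are given, bricks win and devices are ignored.
        if node.get("bricks"):
            return node["bricks"]                 # already provisioned
        if node.get("devices"):
            return provision(node["devices"])     # would need provisioning
        return ["/bricks/brick%d" % i for i in range(count)]   # current default

    def provision(devices):
        # Placeholder for the gdeploy-style provisioning being considered.
        return ["/bricks/" + d.strip("/").replace("/", "_") for d in devices]

    print(bricks_for_node({"bricks": ["/bricks/b0", "/bricks/b1"]}))
    print(bricks_for_node({"devices": ["/dev/sdb"]}))
    print(bricks_for_node({}))   # falls back to /bricks/brick0, /bricks/brick1
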
13:43 josferna joined #gluster-dev
13:55 ndevos msvbhat: I do not get the issue with peer probing/checking when I use both servers, it only happens when there is one storage server in the pool
14:01 msvbhat ndevos: Just a sec
14:01 EinstCrazy joined #gluster-dev
14:02 shyam joined #gluster-dev
14:14 ndevos yay, msvbhat, I managed to get the volume created automatically, and run my tests (after fixing my python code)
14:15 ndevos and http://termbin.com/ma7h has the result of 2 tests, one standard Fedora package, and one patched version :D
14:25 ira joined #gluster-dev
14:28 jiffin joined #gluster-dev
14:30 EinstCrazy joined #gluster-dev
14:32 msvbhat ndevos: Great, Can you send that patch as well?
14:32 msvbhat ndevos: So you fixed python code in peer_ops? That's where I thought the issue was
14:33 msvbhat ndevos: Or did you use create_volume directly?
14:33 msvbhat ndevos: /me will be back in an hour
14:33 ndevos msvbhat: no, I added a 2nd server to the peers list in the config file
14:33 shaunm joined #gluster-dev
14:34 ndevos msvbhat: I always like to run tests in a volume with a single brick, so there is mostly only one storage server in my trusted pool
14:37 kotreshhr joined #gluster-dev
14:37 spalai joined #gluster-dev
14:37 spalai left #gluster-dev
14:44 nbalacha joined #gluster-dev
14:45 lpabon joined #gluster-dev
14:45 Humble joined #gluster-dev
14:57 lpabon joined #gluster-dev
15:14 rafi joined #gluster-dev
15:16 kdhananjay joined #gluster-dev
15:19 aravindavk joined #gluster-dev
15:21 apandey joined #gluster-dev
15:41 shubhendu joined #gluster-dev
15:42 kotreshhr joined #gluster-dev
15:42 skoduri joined #gluster-dev
15:53 kshlm joined #gluster-dev
16:02 gem joined #gluster-dev
16:05 msvbhat ndevos: Right, But there must be a bug in peer_probe library function. Since I generally test in a multi node env, this was never caught I guess
16:05 ndevos msvbhat: yeah, I understand that :) I've also sent you an email with some more notes
16:08 * msvbhat reading email
16:17 ndevos msvbhat: oh, and I also had to delete the tests_d/georep directory, it gave all kinds of weird errors
16:17 rafi joined #gluster-dev
16:18 ndevos msvbhat: but, maybe thats fixed now when I have a correct config file, hmm
16:19 Humble kshlm++
16:19 glusterbot Humble: kshlm's karma is now 51
16:20 ndevos msvbhat: ah, no, this error http://termbin.com/3iza
16:20 sankarshan_ joined #gluster-dev
16:28 msvbhat ndevos: Ah, they are because there is no docstring for those tests
16:28 msvbhat ndevos: I probably should add them there
16:28 ndevos msvbhat: I suspected that, but deleting them worked too ;-)
16:28 msvbhat ndevos: I only concentrated on example tests
16:28 msvbhat ndevos: Simple solution :)
16:29 ndevos msvbhat: and I just blindly tried to run anything :)
16:29 msvbhat BTW you shouldn't have told me to take it easy the other day :) I still have to fix the README thing :)
16:29 msvbhat ndevos: ^^
16:30 ndevos msvbhat: a useful error when such a docstring is missing/incorrect is a MUST too
16:30 msvbhat ndevos: Yeah, I am noting all of these
16:30 ndevos well, it's not ready for general consumption yet, I think; fixing the other bits is more important than the README atm
16:30 msvbhat ndevos: Okay
16:31 msvbhat I will reply to the mail now
16:31 ndevos msvbhat: file issues in your github project?
16:31 ndevos msvbhat: oh, no need to reply to that :)
16:31 kdhananjay joined #gluster-dev
16:32 msvbhat ndevos: Yeah, Please open github issues there
16:32 ndevos msvbhat: I am happy to send some pull requests for fixes too, and if there are issues reported, maybe others send some patches ;-)
16:32 msvbhat ndevos: Or I can merge them to github.com/gluster and then open issues there.
16:33 spalai joined #gluster-dev
16:33 ndevos msvbhat: maybe the gluster/distaf would be better, it should have more visibility
16:33 msvbhat ndevos: Yeah, So do you think we can just merge to github.com/gluster and start fixing issues there
16:34 msvbhat ndevos: Great then. I have the access, I will merge them tomorrow then.
16:34 pranithk joined #gluster-dev
16:34 ndevos msvbhat: sure, I do not think anyone relies on the distaf version there
16:35 ndevos msvbhat: merge, or just push the same master branch to the gluster/distaf repo, either works
16:36 msvbhat ndevos: Okay, I will merge master branch to gluster/distaf and start fixing there
16:36 msvbhat ndevos: Please open issues and I will start fixing them
16:36 ndevos msvbhat: sure, will do!
16:37 msvbhat ndevos++ Great, Thanks
16:37 glusterbot msvbhat: ndevos's karma is now 228
16:37 ndevos msvbhat++ thank you too
16:37 glusterbot ndevos: msvbhat's karma is now 6
16:50 kanagaraj joined #gluster-dev
16:53 spalai left #gluster-dev
17:08 atinm joined #gluster-dev
17:11 nbalacha joined #gluster-dev
18:01 spalai joined #gluster-dev
18:12 ilbot3 joined #gluster-dev
18:12 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
18:19 bfoster joined #gluster-dev
18:27 ira joined #gluster-dev
18:35 ndk joined #gluster-dev
18:43 kotreshhr left #gluster-dev
18:52 lpabon joined #gluster-dev
18:53 spalai joined #gluster-dev
19:29 dlambrig_ joined #gluster-dev
19:38 Javezim joined #gluster-dev
19:40 bfoster joined #gluster-dev
19:42 kbyrne joined #gluster-dev
19:42 kkeithley1 joined #gluster-dev
21:13 dlambrig1 joined #gluster-dev
21:51 owlbot joined #gluster-dev
22:07 ira joined #gluster-dev
22:41 dlambrig1 left #gluster-dev
