IRC log for #gluster-dev, 2016-01-25

All times shown according to UTC.

Time Nick Message
00:03 EinstCrazy joined #gluster-dev
01:16 zhangjn joined #gluster-dev
01:25 zhangjn joined #gluster-dev
01:33 dlambrig joined #gluster-dev
02:33 EinstCrazy joined #gluster-dev
02:38 pranithk joined #gluster-dev
03:30 nbalacha joined #gluster-dev
03:46 kdhananjay joined #gluster-dev
03:46 kanagaraj joined #gluster-dev
03:51 itisravi joined #gluster-dev
03:52 shubhendu joined #gluster-dev
04:21 nishanth joined #gluster-dev
04:26 atinm joined #gluster-dev
04:49 aravindavk joined #gluster-dev
04:57 karthikfff joined #gluster-dev
05:02 Manikandan joined #gluster-dev
05:02 skoduri joined #gluster-dev
05:03 josferna joined #gluster-dev
05:06 baojg joined #gluster-dev
05:07 ppai joined #gluster-dev
05:11 ndarshan joined #gluster-dev
05:20 Apeksha joined #gluster-dev
05:23 EinstCrazy joined #gluster-dev
05:33 ashiq joined #gluster-dev
05:34 hgowtham joined #gluster-dev
05:36 mchangir_ joined #gluster-dev
05:45 gem joined #gluster-dev
05:46 skoduri joined #gluster-dev
05:50 ggarg joined #gluster-dev
05:56 deepakcs joined #gluster-dev
06:03 EinstCrazy joined #gluster-dev
06:04 aravindavk joined #gluster-dev
06:05 Bhaskarakiran joined #gluster-dev
06:08 vimal joined #gluster-dev
06:10 nishanth joined #gluster-dev
06:15 pppp joined #gluster-dev
06:18 shubhendu joined #gluster-dev
06:21 rafi joined #gluster-dev
06:35 baojg joined #gluster-dev
06:41 overclk joined #gluster-dev
06:50 atalur joined #gluster-dev
06:53 atinm ppai, mind having a look at http://review.gluster.org/#/c/13237/ ?
06:54 atinm ppai, Kaushal had some comments on patch set 1 regarding the restructuring, I've taken care of it in my 2nd patch set
06:54 atinm ppai, I'd like to see this reflected in the design specs asap
06:55 ppai atinm, sure
06:59 spalai joined #gluster-dev
07:05 EinstCrazy joined #gluster-dev
07:14 rraja joined #gluster-dev
07:15 baojg joined #gluster-dev
07:17 atalur joined #gluster-dev
07:17 Manikandan joined #gluster-dev
07:17 gem joined #gluster-dev
07:24 baojg joined #gluster-dev
07:26 aravindavk joined #gluster-dev
07:37 atalur joined #gluster-dev
07:39 shubhendu joined #gluster-dev
07:39 nishanth joined #gluster-dev
07:43 Manikandan joined #gluster-dev
07:50 EinstCrazy joined #gluster-dev
07:57 baojg joined #gluster-dev
08:00 Saravanakmr joined #gluster-dev
08:06 EinstCrazy joined #gluster-dev
08:06 overclk joined #gluster-dev
08:23 aravindavk joined #gluster-dev
08:30 ppai joined #gluster-dev
08:30 EinstCrazy joined #gluster-dev
08:34 gem joined #gluster-dev
08:38 pranithk1 joined #gluster-dev
08:44 pranithk joined #gluster-dev
08:48 asengupt joined #gluster-dev
08:55 pppp joined #gluster-dev
09:00 zhangjn joined #gluster-dev
09:04 kdhananjay1 joined #gluster-dev
09:07 kdhananjay joined #gluster-dev
09:10 Bhaskarakiran joined #gluster-dev
09:19 EinstCra_ joined #gluster-dev
09:27 Bhaskarakiran joined #gluster-dev
09:29 kdhananjay joined #gluster-dev
09:40 Bhaskarakiran_ joined #gluster-dev
09:48 kdhananjay joined #gluster-dev
09:59 pppp joined #gluster-dev
10:10 overclk itisravi: ping, manu sent an email regarding stack corruption during one of the tests.
10:10 overclk itisravi, the area of code is somewhere around br_stub_fstat_cbk() (in 3.7)
10:10 itisravi overclk: yes saw the mail
10:11 overclk itisravi: I'm suspecting the use/implementation of br_stub_cleanup_local()
10:12 itisravi overclk: is it reproducible on *BSD consistently?
10:12 overclk itisravi: setting up local does an fd_ref() and inode_ref(). During dtor, it does fd_unref() and inode_unref(), but fd_unref() can also do inode_unref() itself.
10:13 itisravi overclk: hmm
10:13 overclk itisravi, yeh, as of now on netbsd.
10:14 overclk itisravi: if you see the stack (the saner one -- with abort() placed appropriately), the stack looks good.
10:15 overclk itisravi: and the same with the other "pointers" (this, inode), hence my suspicion of br_stub_cleanup_local()
10:15 Manikandan joined #gluster-dev
10:17 itisravi overclk: what is the .t that failed?
10:18 EinstCrazy joined #gluster-dev
10:19 ggarg joined #gluster-dev
10:23 overclk itisravi: ./tests/basic/mount-nfs-auth.t
10:23 itisravi overclk: not sure I understood where fd_unref() by itself is also doing inode_unref()
10:23 kdhananjay joined #gluster-dev
10:24 overclk itisravi: in fd_unref() if refcount reaches zero, fd_destroy() gets called which does inode_unref (fd->inode)
10:24 overclk itisravi: but that's when refcount reaches zero.. during a write that should not happen
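For readers following the refcount reasoning above, here is a minimal, self-contained C sketch of the flow overclk describes. The structs, refcount fields and the "local" setup/teardown are simplified stand-ins, not the real libglusterfs fd/inode code, and the numbers are illustrative only.

    /* Simplified stand-ins for the fd/inode refcounting discussed above;
     * field names and behaviour are illustrative, not the real
     * libglusterfs implementation. */
    #include <stdio.h>
    #include <stdlib.h>

    struct inode { int ref; };
    struct fd    { int ref; struct inode *inode; };

    static void inode_unref (struct inode *inode)
    {
            if (--inode->ref == 0)
                    free (inode);
    }

    static void fd_destroy (struct fd *fd)
    {
            inode_unref (fd->inode);   /* fd_destroy() drops the fd's inode ref */
            free (fd);
    }

    static void fd_unref (struct fd *fd)
    {
            if (--fd->ref == 0)        /* only when the *last* fd ref goes away */
                    fd_destroy (fd);   /* ...does this reach inode_unref() */
    }

    int main (void)
    {
            struct inode *inode = calloc (1, sizeof (*inode));
            struct fd    *fd    = calloc (1, sizeof (*fd));

            fd->inode = inode;
            fd->ref = 1;  inode->ref = 1;   /* pre-existing holders */
            fd->ref++;    inode->ref++;     /* stub local setup: fd_ref()/inode_ref() */

            /* dtor equivalent: fd_unref() plus an explicit inode_unref().
             * If fd->ref ever reached zero inside fd_unref(), fd_destroy()
             * would call inode_unref() as well, dropping one more inode ref
             * than the local took -- the suspicion voiced above.  In the
             * core being examined the fd refcount was 3, so that path was
             * not taken. */
            fd_unref (fd);
            inode_unref (inode);

            printf ("fd ref=%d, inode ref=%d\n", fd->ref, inode->ref);
            free (fd);
            free (inode);                   /* toy cleanup of remaining refs */
            return 0;
    }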
10:25 itisravi overclk: hmm
10:26 kdhananjay joined #gluster-dev
10:32 atinm joined #gluster-dev
10:39 kdhananjay1 joined #gluster-dev
10:43 gem joined #gluster-dev
10:43 ggarg joined #gluster-dev
10:43 kdhananjay joined #gluster-dev
10:44 overclk itisravi: in the core, fd refcount is 3, so it's not during dtor.
10:45 itisravi overclk: okay
10:46 overclk itisravi: the log message is explainable - fstat() always tries to fetch the inode context, while the stub only maintains an inode context for S_IFREG.
10:47 itisravi overclk: but your patch adds that fix.
10:48 Bhaskarakiran joined #gluster-dev
10:49 itisravi i.e. don't check for bad object if !S_IFREG
10:53 overclk itisravi: yeh (http://review.gluster.org/#/c/13276/). But what about the corruption that manu talks about? Even without my patch, there should not be corruption (as my patch does not fix any stack corruptions as such :))
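As an aside, here is a stand-alone sketch of the kind of guard itisravi summarises above ("don't check for bad object if !S_IFREG"). The types and the object_marked_bad() helper are hypothetical stand-ins; this does not reproduce the actual change in http://review.gluster.org/#/c/13276/.

    /* Hypothetical stand-ins: the real bit-rot stub works on struct iatt
     * and its own inode context, not on these toy types. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/stat.h>

    struct statbuf { mode_t ia_mode; };

    static bool object_marked_bad (void *inode_ctx)
    {
            (void) inode_ctx;
            return false;    /* placeholder: the stub would consult its inode ctx */
    }

    static int fstat_cbk_sketch (void *inode_ctx, struct statbuf *buf)
    {
            /* The stub only maintains bad-object state for regular files,
             * so skip the check for everything else; this avoids always
             * trying to fetch an inode context that was never set. */
            if (S_ISREG (buf->ia_mode) && object_marked_bad (inode_ctx))
                    return -1;   /* would map to EIO for the application */
            return 0;
    }

    int main (void)
    {
            struct statbuf buf = { .ia_mode = S_IFREG };
            return fstat_cbk_sketch (NULL, &buf);
    }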
10:56 itisravi overclk: agreed. If it always fails on a particular line in nfs-auth.t, we can prolly exit before that line and then run that test line manually. And see if it is the double inode_unref thing
10:56 itisravi by attaching it to gdb.
10:58 overclk itisravi: looking at the core, it doesn't look like a double unref.
11:17 kdhananjay1 joined #gluster-dev
11:19 aravindavk joined #gluster-dev
11:23 EinstCrazy joined #gluster-dev
11:27 deepakcs joined #gluster-dev
11:30 gem joined #gluster-dev
11:31 atinm joined #gluster-dev
11:41 ilbot3 joined #gluster-dev
11:41 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
11:44 aravindavk joined #gluster-dev
11:46 bfoster joined #gluster-dev
11:49 rafi joined #gluster-dev
11:51 aravindavk joined #gluster-dev
11:51 deepakcs joined #gluster-dev
11:51 ggarg joined #gluster-dev
11:52 atalur joined #gluster-dev
12:28 ppai joined #gluster-dev
12:48 ndevos pranithk, xavih: could one of you reply to this ec question? http://thread.gmane.org/gmane.comp.file-systems.gluster.user/23727
12:49 pranithk ndevos: was it the EC recovery without gluster processes running ?
12:49 ndevos kdhananjay: we need a tool like that ^ for sharding too, I think?
12:49 ndevos pranithk: yeah
12:49 kdhananjay ndevos: yes we do.
12:49 pranithk ndevos: yeah, composing reply for that as we speak
12:50 ndevos pranithk: we used to have a tool that could put back stripes and restore the files in case gluster itself (network issue?) is non-functional
12:50 ndevos pranithk++ thanks!
12:50 glusterbot ndevos: pranithk's karma is now 41
12:51 ndevos kdhananjay: if there is no tool for it yet, could you file a bug so that we wont forget about it?
12:52 kdhananjay ndevos: Sure, i will do that. But i fail to understand when a user would need such a tool.
12:53 ndevos kdhananjay: it's like a fsck tool, users should hopefully not need it, but if they can get their data out of a non-functional environment, they really appreciate it
12:54 kdhananjay ndevos: Hmm ok.
12:55 ndevos kdhananjay: in emergencies, it is sometimes easier/quicker to be able to repair the data and access it through any way, fixing Gluster (or hardware/network) could be more time consuming/difficult
12:56 kdhananjay ndevos: Got it.
12:56 pranithk ndevos: kdhananjay: Even I asked the user for similar inputs. Let us wait for the user's perspective on it
12:56 kdhananjay pranithk: Sure.
12:57 ndevos pranithk: I do not think tools to repair data are optional; only very few admins will want to use ec/shard/.. when they cannot reconstruct the files manually in case of disaster
13:01 ndevos like https://github.com/gluster/glusterfs/blob/master/extras/stripe-merge.c
13:08 pranithk ndevos: that seems to read from the mount and write to another file.
13:08 pranithk ndevos: Oops, wrong
13:08 pranithk ndevos: It seems like the input files need to be on the same machine
13:09 ndevos pranithk: yes, I always assumed admins can use it in case they have a full gluster/network outage
13:09 csim mhhh https://build.gluster.org/job/smoke_test_new_builders/15/console
13:09 csim why does it need sudo ?
13:09 csim (and sudo with a tty)
13:10 ndevos pranithk: or, in case of restoring files from local backups (like tapes)
13:10 pranithk ndevos: Let us see what this user says also. I cced xavih as well for his inputs for the same.
13:10 pranithk ndevos: hmm... how does he know the order?
13:10 pranithk ndevos: In EC we don't have such information
13:11 ndevos pranithk: what order? you mean the bricks from the volume? I guess the tool could use a .vol file as input for that
13:12 ndevos csim: I guess it needs sudo in order to mount, and possibly to start the gluster processes too?
13:13 csim ndevos: yeah, but I am not sure we are there yet
13:13 csim but at least it builds, so that's good
13:20 pranithk ndevos: makes sense. Let us get input from more people.
13:20 mchangir_ joined #gluster-dev
13:21 pranithk ndevos: I am not sure if people would like to copy the fragments to one location to run this tool. Best would be for the program to do all this itself, taking the .vol
13:21 pranithk ndevos: reading from the bricks and writing the output file
13:24 dlambrig joined #gluster-dev
13:28 ppai joined #gluster-dev
13:30 ndevos pranithk: it would, but it should definitely be possible to have all the pieces on a local system, where network is not a requirement
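To make the idea concrete, here is a minimal sketch of the kind of offline reassembly being discussed: interleave fixed-size blocks from locally copied fragments into one output file. It illustrates only the simple striping case that extras/stripe-merge.c targets; a real tool, and anything for EC or shard, would also have to parse the .vol file, honour xattrs and holes, and do the erasure-coding math, none of which is shown here. The command-line arguments are hypothetical.

    /* Hypothetical offline reassembler: round-robin fixed-size blocks
     * from fragment files into a single output file.  Illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>

    int main (int argc, char **argv)
    {
            if (argc < 4) {
                    fprintf (stderr,
                             "usage: %s <blocksize> <output> <frag0> [frag1 ...]\n",
                             argv[0]);
                    return 1;
            }

            size_t  bs     = strtoul (argv[1], NULL, 10);
            FILE   *out    = fopen (argv[2], "wb");
            int     nfrags = argc - 3;
            FILE  **frags  = calloc (nfrags, sizeof (*frags));
            char   *buf    = malloc (bs);

            if (!out || !frags || !buf || bs == 0)
                    return 1;

            for (int i = 0; i < nfrags; i++) {
                    frags[i] = fopen (argv[3 + i], "rb");
                    if (!frags[i]) {
                            perror (argv[3 + i]);
                            return 1;
                    }
            }

            /* Block 0 from fragment 0, block 1 from fragment 1, ... and
             * wrap around, until a fragment runs out of data. */
            for (int done = 0; !done; ) {
                    for (int i = 0; i < nfrags; i++) {
                            size_t n = fread (buf, 1, bs, frags[i]);
                            if (n == 0) {
                                    done = 1;
                                    break;
                            }
                            fwrite (buf, 1, n, out);
                    }
            }

            fclose (out);
            return 0;
    }

It would be invoked as, say, "./reassemble 131072 restored.img /backup/brick1/file /backup/brick2/file", after the fragments have been copied off the bricks to one local machine -- the point made above about the network not being a requirement.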
13:30 * ndevos steps out for lunch, bbl
13:34 dlambrig joined #gluster-dev
13:50 ira joined #gluster-dev
13:52 rafi1 joined #gluster-dev
13:55 spalai left #gluster-dev
14:05 overclk joined #gluster-dev
14:11 dlambrig joined #gluster-dev
14:12 nbalacha joined #gluster-dev
14:13 csim so, i need to stop gerrit for a while, is this ok ? (starting a reindex)
14:22 Manikandan joined #gluster-dev
14:40 shyam joined #gluster-dev
14:42 Saravanakmr joined #gluster-dev
15:07 Gaurav joined #gluster-dev
15:13 Manikandan joined #gluster-dev
15:13 purpleidea joined #gluster-dev
15:13 purpleidea joined #gluster-dev
15:14 luizcpg joined #gluster-dev
15:14 luizcpg Hi, can someone say when gluster 3.7.7 will be released?
15:14 luizcpg Thanks
15:28 ndevos luizcpg: pranithk is working on that, hopefully it happens in the next few days
15:29 pranithk luizcpg: https://public.pad.fsfe.org/p/glusterfs-3.7.7 has the final list of patches that we are waiting on
15:29 pranithk luizcpg: 5 more patches to go
15:29 luizcpg Cool… I'm facing an annoying bug with the arbiter…
15:29 luizcpg http://review.gluster.org/#/c/12479/
15:29 luizcpg this seems to be the patch to fix it...
15:30 ndevos pranithk: how critical are those pending patches? maybe just do a release and get those patches in next month?
15:31 luizcpg http://review.gluster.org/#/c/12768/
15:31 luizcpg ^^ this one seems to be critical…
15:32 baojg joined #gluster-dev
15:34 pranithk ndevos: there is one data loss and one crash for which patches are there...
15:34 pranithk luizcpg: that is already merged http://review.gluster.org/#/c/12479/
15:35 pranithk luizcpg: I am waiting for http://review.gluster.org/#/c/12768/ to be merged
15:37 luizcpg got it… would be great to have 3.7.7 out asap :)
15:37 josferna joined #gluster-dev
15:37 shubhendu joined #gluster-dev
15:39 pranithk luizcpg: tell me about it! :-). Yep, 5 more patches to go!
15:39 hagarth joined #gluster-dev
15:40 wushudoin joined #gluster-dev
15:41 luizcpg for sure…
15:41 wushudoin joined #gluster-dev
15:41 luizcpg I’m working on a production environment and the arbiter is critical for the solution...
15:42 luizcpg I have just 2 big nodes available and the third one (small node) is the arbiter....
15:43 luizcpg Would you say (gluster 3.7.7 with the arbiter) is production ready ?
15:43 pranithk luizcpg: I don't know of a critical problem in arbiter after these patches are merged.
15:44 pranithk luizcpg: I mean known critical problem of course :-)
15:45 luizcpg great… this is what I’m expecting…
15:45 luizcpg but if something goes wrong I’ll let you know.
15:45 pranithk luizcpg: All the critical things were identified by another user Adrian at the time of 3.7.6
15:46 pranithk luizcpg: yep, catch us either here or on gluster-users. Our team is pretty active there.
15:47 luizcpg nice…
15:47 pranithk luizcpg: may I know what is your solution with gluster?
15:48 luizcpg I’m using gluster 3.7.6 with ovirt 3.6.1
15:48 pranithk luizcpg: Ah! got it
15:48 csim https://build.gluster.org/job/smoke_test_new_builders/17/console wow, it seems to work
15:48 luizcpg seems things are going well apart from this arbiter issue...
15:49 pranithk luizcpg: yeah :-(. I found it pretty late while looking at the code for something completely unrelated.
15:49 pranithk luizcpg: sorry about that :-(. We should have caught it a lot earlier
15:49 luizcpg I was able to create the replica 3 + arbiter, but suddenly (after pushing the environment) the total space of the volume was limited to the arbiter's total available space.
15:50 pranithk luizcpg: got it
15:50 luizcpg it seems the arbiter was converted into a regular brick…
15:50 pranithk luizcpg: totally!
15:50 luizcpg that’s why I need this patch… :)
15:51 pranithk luizcpg: yep. It is delivered!
15:51 luizcpg awesome… I’ll try and let you know...
15:51 pranithk luizcpg: hey, I need to wind down for today. Feel free to CC pkarampu@redhat.com on gluster-users mails if you get into any problems...
15:51 luizcpg for sure… thanks man
15:52 pranithk luizcpg: cya!
15:54 hagarth csim: my gerrit keys started functioning again!
15:56 csim hagarth: uh ?
15:56 csim I mean "yeah I did something but that's too complicated to explain"
15:56 csim (no, i did nothing in fact)
15:56 csim hagarth: so you did fix it ?
15:59 csim also, the centos6 builder on the new server is finally working (and not building anything as root)
16:03 Gaurav joined #gluster-dev
16:18 Gaurav joined #gluster-dev
16:30 spalai joined #gluster-dev
16:34 pppp joined #gluster-dev
16:39 rafi joined #gluster-dev
16:42 Manikandan joined #gluster-dev
16:43 gem joined #gluster-dev
16:54 baojg joined #gluster-dev
17:17 spalai left #gluster-dev
17:18 spalai joined #gluster-dev
17:29 Manikandan joined #gluster-dev
17:34 kkeithley joined #gluster-dev
17:34 hagarth joined #gluster-dev
17:36 shubhendu_ joined #gluster-dev
17:36 ndk joined #gluster-dev
17:37 dlambrig1 joined #gluster-dev
17:38 bfoster joined #gluster-dev
17:38 Gaurav joined #gluster-dev
17:40 raghu joined #gluster-dev
17:48 shubhendu__ joined #gluster-dev
17:52 gem joined #gluster-dev
18:24 baojg joined #gluster-dev
18:46 EinstCrazy joined #gluster-dev
18:49 shubhendu joined #gluster-dev
18:54 spalai joined #gluster-dev
19:06 lalatenduM joined #gluster-dev
19:49 EinstCrazy joined #gluster-dev
19:54 baojg joined #gluster-dev
20:11 baojg joined #gluster-dev
21:06 ndevos hmm, a change in a header file and smoke tests fail with: /opt/qa/smoke.sh: line 35: cd: /mnt: Connection refused
21:06 ndevos local testing just worked fine.... /me is confused
21:09 EinstCrazy joined #gluster-dev
21:26 baojg joined #gluster-dev
21:48 hagarth joined #gluster-dev
22:56 baojg joined #gluster-dev
23:16 shyam left #gluster-dev
23:31 EinstCrazy joined #gluster-dev
23:57 EinstCrazy joined #gluster-dev
