
IRC log for #gluster, 2014-09-06


All times shown according to UTC.

Time Nick Message
00:12 ws2k33 joined #gluster
00:16 dmyers joined #gluster
00:17 dmyers joined #gluster
00:17 jobewan joined #gluster
00:18 dtrainor joined #gluster
00:25 JoeJulian msolo: I was looking at that email. None that I know of. Does it become consistent after you close the file?
00:26 msolo no, it becomes corrupt in a different way
00:26 msolo the concurrent read is different bits than the read after close
00:26 msolo both are wrong
00:26 msolo i'm running this on top of XFS
00:27 JoeJulian but all your access is through the fuse client, right?
00:27 msolo i'm actually stracing the glusterfsd process doing the writing now
00:27 msolo yes
00:27 JoeJulian Have you filed a bug report?
00:27 msolo i can see the pwrite() for the data that goes missing
00:27 msolo not yet, the tool to reproduce is not readily shareable
00:27 JoeJulian I suspect the data won't stay missing, but that's just a guess.
00:27 msolo and i'm having a hard time getting a python script to reproduce the issue
00:28 JoeJulian of course
00:28 msolo how do you mean "won't stay missing?"
00:28 JoeJulian I'm guessing it's a race condition where the read's happening before the write's finished.
00:29 msolo oh, that is absolutely one issue
00:29 msolo it's plain as day from the strace
00:29 msolo but the other issue is that it is permanently wrong on disk
00:29 * JoeJulian raises an eyebrow...
00:30 JoeJulian I haven't seen anything even remotely like that. I'll need to see it to even make any further speculations.
00:30 coredump|br joined #gluster
00:30 JoeJulian Hopefully you can make that python script fail.
00:31 msolo is there a good place for the strace to upload?
00:31 JoeJulian fpaste.org
00:31 JoeJulian If it's too big for that, a bugzilla attachment.
00:31 msolo i'll prune it down as much as i can
00:36 neofob joined #gluster
00:37 msolo JoeJulian: i'll put a bug together
00:37 msolo in the meantime, here is the paste
00:37 msolo http://fpaste.org/131455/40996378/
00:37 glusterbot Title: #131455 Fedora Project Pastebin (at fpaste.org)
00:37 msolo that first pwrite(38... call, that data vanishes
00:38 msolo the last line, the next pwrite(38, makes it into the file
00:40 msolo what data is going to be the most helpful in a bug report, failing a program that actually reproduces it?
00:57 doo joined #gluster
01:10 msolo ah, ok, i see the issue, the offset to pwrite is not incremented
01:10 msolo and the underlying file has not been opened with O_APPEND
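The stale-offset behaviour msolo describes can be demonstrated outside Gluster: pwrite() writes at an explicit offset, so if the offset is not advanced between calls, a later write silently clobbers an earlier one instead of appending. A minimal sketch in Python (os.pwrite wraps the same syscall; the temp file is illustrative):

```python
import os
import tempfile

# Simulate two writers that both issue pwrite() with a stale offset of 0:
# the second write overwrites the first, so its leading bytes "vanish".
fd, path = tempfile.mkstemp()
try:
    os.pwrite(fd, b"AAAA", 0)   # first write at offset 0
    os.pwrite(fd, b"BB", 0)     # stale offset: overwrites instead of appending
    with open(path, "rb") as f:
        data = f.read()
    # The file now contains b"BBAA", not b"AAAABB".
    print(data)
finally:
    os.close(fd)
    os.remove(path)
```

If the writer had tracked and incremented the offset (or the kernel were appending for it), both payloads would survive; with a fixed offset, the overlap is permanent on disk, which matches the corruption seen in the strace.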
01:17 recidive joined #gluster
01:40 mkzero joined #gluster
01:51 MacWinner joined #gluster
02:44 recidive joined #gluster
02:56 itisravi_ joined #gluster
03:08 recidive joined #gluster
03:16 bala joined #gluster
03:25 rotbeard joined #gluster
03:29 hagarth joined #gluster
03:30 daxatlas joined #gluster
03:43 necrogami joined #gluster
04:21 glusterbot New news from newglusterbugs: [Bug 1138897] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=1138897>
04:30 harish joined #gluster
04:40 anoopcs joined #gluster
04:42 anoopcs joined #gluster
05:46 Philambdo joined #gluster
05:51 nbalachandran joined #gluster
06:00 nbalachandran joined #gluster
06:07 rjoseph joined #gluster
06:35 rjoseph joined #gluster
07:31 zerick joined #gluster
07:38 ramteid joined #gluster
07:54 rejy joined #gluster
08:06 gmcwhistler joined #gluster
08:12 Gabou joined #gluster
08:12 Gabou Hi everybody!
08:13 Gabou I came few days ago to ask some questions about glusterFS, i tested it but I have now more questions.
08:13 Gabou I have 2 vps synced between them, if one of them has a hdd problem and lost some data, it will tell the other server to remove the data too?
08:13 Gabou what about split brain?
08:17 Gabou can I setup something like.. if it's disconnected, put the data in readonly?
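Gabou's last question is what Gluster's quorum options are for: with client quorum enabled, a replica that loses contact with its peers refuses writes rather than diverging into split-brain. A sketch of the relevant volume settings (the volume name myvol is illustrative; exact behaviour depends on the Gluster version and replica count):

```shell
# Illustrative volume name "myvol". With client quorum set to "auto" on a
# replicated volume, the side that loses quorum starts rejecting writes
# (effectively read-only), which prevents the replicas from diverging.
gluster volume set myvol cluster.quorum-type auto

# Server-side quorum goes further: bricks are stopped entirely when the
# trusted pool loses quorum.
gluster volume set myvol cluster.server-quorum-type server
```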
08:33 lalatenduM joined #gluster
09:10 aulait joined #gluster
09:35 Philambdo joined #gluster
09:42 soumya joined #gluster
09:52 glusterbot New news from newglusterbugs: [Bug 1138922] DHT + rebalance : rebalance process crashed + data loss + few Directories are present on sub-volumes but not visible on mount point + lookup is not healing directories <https://bugzilla.redhat.com/show_bug.cgi?id=1138922>
11:12 LebedevRI joined #gluster
11:12 soumya joined #gluster
11:24 recidive joined #gluster
11:27 capri joined #gluster
11:34 rjoseph joined #gluster
11:52 recidive joined #gluster
12:22 recidive joined #gluster
12:29 harish_ joined #gluster
13:23 diegows joined #gluster
13:38 elico joined #gluster
13:47 recidive joined #gluster
14:34 rotbeard joined #gluster
14:34 recidive joined #gluster
14:49 bala joined #gluster
14:51 lkoranda joined #gluster
14:52 ndevos joined #gluster
14:52 ndevos joined #gluster
15:05 sjm joined #gluster
15:06 bala joined #gluster
15:07 hagarth joined #gluster
15:08 chirino joined #gluster
15:22 sjm left #gluster
15:24 r9x joined #gluster
15:24 osiekhan joined #gluster
15:24 r9x left #gluster
15:25 yol0 joined #gluster
15:33 yol0 hi, i'm wondering, is it possible to grow a brick? I mean let's say i have two servers, each server has a raid array, each raid array is a brick. now by growing the raid array will it automatically grow the trusted pool?
15:34 yol0 So instead of adding server to grow the pool i wanna add disks to a raid array to grow the pool
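A brick's size is simply the size of the filesystem backing it, so growing the RAID array and then the filesystem on each server grows the volume's capacity without adding bricks. A rough sketch for an XFS-backed brick (device and path names are illustrative; grow the RAID device first using your array's own tooling):

```shell
# After the underlying RAID device has been enlarged, grow the filesystem
# that backs the brick; XFS grows online while mounted.
# /bricks/brick1 is an illustrative brick mount point.
xfs_growfs /bricks/brick1

# Gluster picks up the new brick capacity automatically; verify from a client:
df -h /mnt/glustervol
```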
15:42 diegows joined #gluster
15:49 edward1 joined #gluster
15:54 Philambdo joined #gluster
15:56 recidive joined #gluster
16:09 qdk joined #gluster
16:27 hagarth joined #gluster
16:38 recidive joined #gluster
16:38 pradeepto joined #gluster
16:53 daxatlas joined #gluster
17:11 hagarth joined #gluster
17:18 daxatlas joined #gluster
17:22 edong23 joined #gluster
17:24 glusterbot New news from newglusterbugs: [Bug 1138952] Geo-Rep: Backport of patches to 3.6 branch. <https://bugzilla.redhat.com/show_bug.cgi?id=1138952>
17:38 plarsen joined #gluster
17:47 MacWinner joined #gluster
17:50 sjm joined #gluster
17:53 doo joined #gluster
18:03 sjm1 joined #gluster
18:12 elico joined #gluster
18:14 sjm joined #gluster
18:20 sjm joined #gluster
18:27 cfeller joined #gluster
18:30 cfeller joined #gluster
18:36 sjm left #gluster
19:03 getup- joined #gluster
19:07 daxatlas joined #gluster
19:09 bennyturns joined #gluster
19:10 PsionTheory joined #gluster
19:12 avati joined #gluster
19:12 wgao joined #gluster
19:13 rjoseph joined #gluster
19:13 cultavix joined #gluster
19:13 sauce joined #gluster
19:15 msolo joined #gluster
19:16 diegows joined #gluster
19:21 NuxRo joined #gluster
19:24 Debolaz joined #gluster
19:24 Debolaz joined #gluster
19:29 ThatGraemeGuy joined #gluster
19:29 swc|666 joined #gluster
19:29 AaronGr joined #gluster
19:29 al joined #gluster
19:30 elico joined #gluster
20:07 chirino joined #gluster
20:15 qdk joined #gluster
20:17 Philambdo joined #gluster
20:59 rotbeard joined #gluster
21:14 siXy joined #gluster
21:37 zerick joined #gluster
22:29 m0zes joined #gluster
22:34 daxatlas joined #gluster
22:45 delhage joined #gluster
22:55 glusterbot New news from newglusterbugs: [Bug 1138970] file corruption during concurrent read/write <https://bugzilla.redhat.com/show_bug.cgi?id=1138970>
23:24 diegows joined #gluster
23:44 capri joined #gluster
23:56 msolo joined #gluster
