
IRC log for #gluster-dev, 2015-11-19


All times shown according to UTC.

Time Nick Message
00:08 zhangjn joined #gluster-dev
00:09 zhangjn joined #gluster-dev
00:10 zhangjn joined #gluster-dev
00:23 hgichon joined #gluster-dev
00:59 shyam joined #gluster-dev
01:07 zhangjn joined #gluster-dev
01:08 zhangjn_ joined #gluster-dev
02:24 kbyrne joined #gluster-dev
02:25 sac joined #gluster-dev
02:47 ilbot3 joined #gluster-dev
02:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:57 shubhendu joined #gluster-dev
03:22 pranithk joined #gluster-dev
03:29 aravindavk joined #gluster-dev
03:40 ppai joined #gluster-dev
03:51 overclk joined #gluster-dev
03:57 rafi joined #gluster-dev
03:59 rafi joined #gluster-dev
04:06 rafi joined #gluster-dev
04:08 rafi joined #gluster-dev
04:11 atinm joined #gluster-dev
04:12 dlambrig_ joined #gluster-dev
04:13 rafi joined #gluster-dev
04:13 kdhananjay joined #gluster-dev
04:15 itisravi joined #gluster-dev
04:19 aravindavk joined #gluster-dev
04:22 gem joined #gluster-dev
04:26 nishanth joined #gluster-dev
04:33 rafi joined #gluster-dev
04:33 sakshi joined #gluster-dev
04:36 skoduri joined #gluster-dev
04:36 kanagaraj joined #gluster-dev
04:47 rafi joined #gluster-dev
04:48 mchangir|wfh joined #gluster-dev
04:51 rafi joined #gluster-dev
04:53 rafi joined #gluster-dev
04:58 rafi joined #gluster-dev
04:58 asengupt joined #gluster-dev
04:58 aspandey joined #gluster-dev
04:59 nbalacha joined #gluster-dev
05:00 pppp joined #gluster-dev
05:00 rafi joined #gluster-dev
05:04 vimal joined #gluster-dev
05:06 rafi joined #gluster-dev
05:07 nbalacha joined #gluster-dev
05:09 rafi joined #gluster-dev
05:17 rafi joined #gluster-dev
05:27 rjoseph joined #gluster-dev
05:28 Bhaskarakiran joined #gluster-dev
05:32 ndarshan joined #gluster-dev
05:33 jiffin joined #gluster-dev
05:40 hgowtham joined #gluster-dev
05:40 Manikandan joined #gluster-dev
05:43 rafi joined #gluster-dev
05:52 anekkunt joined #gluster-dev
06:00 hgowtham joined #gluster-dev
06:01 vmallika joined #gluster-dev
06:01 ashiq joined #gluster-dev
06:04 josferna joined #gluster-dev
06:12 mchangir|wfh joined #gluster-dev
06:12 atalur joined #gluster-dev
06:24 atinm nbalacha, pm
06:24 pranithk joined #gluster-dev
06:25 nbalacha atinm, hi
06:25 atinm nbalacha, I am seeing a warning message in the glusterd log when I execute gluster volume set help
06:25 atinm [2015-11-19 06:23:18.036573] W [MSGID: 101095] [xlator.c:142:xlator_volopt_dynload] 0-xlator: /usr/local/lib/glusterfs/3.8dev/xlator/cluster/tier.so: undefined symbol: dht_methods
06:27 nbalacha atinm, is this upstream master?
06:27 atinm nbalacha, yes
06:27 nbalacha atinm, I will take a look and get back
06:28 atinm nbalacha, thanks
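A quick way to confirm which symbol an xlator is missing is to inspect its dynamic symbol table; a minimal sketch, assuming the install prefix from the warning above:

    # "U dht_methods" in the output confirms the unresolved symbol
    nm -D /usr/local/lib/glusterfs/3.8dev/xlator/cluster/tier.so | grep dht_methods
    # ldd -r performs relocations and also reports unresolved symbols
    ldd -r /usr/local/lib/glusterfs/3.8dev/xlator/cluster/tier.so 2>&1 | grep dht_methods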
06:28 kdhananjay joined #gluster-dev
06:33 Saravana_ joined #gluster-dev
06:36 Manikandan joined #gluster-dev
06:36 sankarshan joined #gluster-dev
06:37 atalur_ joined #gluster-dev
06:37 spalai joined #gluster-dev
07:07 jiffin joined #gluster-dev
07:08 gem joined #gluster-dev
07:08 EinstCrazy joined #gluster-dev
07:16 Saravana_ joined #gluster-dev
07:22 kshlm joined #gluster-dev
07:34 nbalacha atin, can you file a BZ for this?
07:37 Manikandan joined #gluster-dev
07:44 RedW joined #gluster-dev
07:56 zhangjn joined #gluster-dev
07:59 zoldar joined #gluster-dev
08:01 zoldar pranithk: Hi. It's Adrian from ML, the one who reported the locking issue with arbiter volume setup.
08:01 pranithk zoldar: hey!
08:02 pranithk zoldar: Did you get a chance to read the commit message of the patch?
08:02 zoldar pranithk: give me a sec, will take a second look
08:02 pranithk zoldar: It is a bad bug :-(. Whenever we upgrade, it becomes a replicate volume without the arbiter :-(
08:02 ndevos rastar, obnox: just reading a little about your testing conversation yesterday
08:03 pranithk zoldar: The reason I asked you to come online is to fix the volume so that it becomes arbiter volume again...
08:03 ndevos rastar, obnox: the easiest way to test is to build RPMs, install the glusterfs-regression-tests RPM on a VM, and run the /usr/.../run-tests.sh script
08:03 pranithk zoldar: But you need to take in this patch and build debs/rpms as the next release will take weeks.
08:03 ndevos rastar, obnox: building the RPMs is pretty simple too (if you're on Fedora/RHEL): make -C extras/LinuxRPMS rpms
08:04 ndevos (or something like that)
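A rough sketch of that workflow, assuming a Fedora/RHEL VM with the build dependencies installed; the exact make target and the installed location of run-tests.sh can differ per branch, so treat these paths as assumptions to verify against the tree:

    # build RPMs from a configured source tree
    make -C extras/LinuxRPM glusterrpms
    # install everything, including the glusterfs-regression-tests subpackage
    yum install -y extras/LinuxRPM/glusterfs*.rpm
    # run the regression suite shipped with the regression-tests package
    /usr/share/glusterfs/run-tests.sh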
08:04 zoldar pranithk: yeah, clear
08:04 zoldar pranithk: I was actually about to give up on it and go with 2-node setup
08:05 zoldar pranithk: does that patch apply cleanly to the latest release or should it be applied to the version from the repository?
08:05 pranithk zoldar: Since this is a new feature, in the initial days it will be a bit bumpy. But we feel over time this will be the preferred way as it also prevents split-brains.
08:05 zoldar pranithk: yes, that's what I was counting on
08:06 zoldar pranithk: there's still no way to add 3rd, arbiter brick to the existing volume, right?
08:06 pranithk zoldar: I feel it should apply cleanly to the latest release. If not let me know, I can change the patch to apply to whatever you have.
08:07 pranithk zoldar: itisravi is going to do that for the upcoming releases.
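For context, the arbiter configuration discussed here is chosen at volume-create time; a minimal sketch with placeholder host and brick names:

    # replica 3 where the third brick of the set only stores metadata (the arbiter)
    gluster volume create testvol replica 3 arbiter 1 \
        host1:/bricks/b1 host2:/bricks/b2 host3:/bricks/arbiter
    gluster volume start testvol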
08:08 Humble obnox, ndevos the gluster org redirection to developers index is fixed .. http://gluster.readthedocs.org/en/latest/Developer-guide/Developers-Index/
08:08 Humble thanks tigert__
08:08 Humble tigert++
08:09 glusterbot Humble: tigert's karma is now 15
08:09 pranithk zoldar: After you apply the patches and build the debs, let me know. We also need to make some changes to the glusterd-store so that we can recover the volume as arbiter.
08:09 ndevos thanks guys, Humble++ tigert++
08:09 glusterbot ndevos: Humble's karma is now 26
08:09 glusterbot ndevos: tigert's karma is now 16
08:09 Humble yw
08:10 Humble ndevos, currently we have a banner/widget on gluster.org about "GlusterFS 3.5.4 maintenance release"
08:10 Humble may be we need to remove this
08:10 Humble it gives an impression that the latest version is "3.5"  :)
08:10 overclk_ joined #gluster-dev
08:10 ndevos Humble: I guess so, 3.5.6 (iirc) is out for a while too
08:10 Humble Isnt it?
08:11 ndevos Humble: yes, that too :)
08:12 Humble may be once again tigert can help us :)
08:12 Humble otherwise we need to send a PR
08:14 Humble ashiq, ping, pm
08:14 zoldar pranithk: ok, I'll try
08:16 ndevos Humble: I guess tigert would appreciate a PR ;-)
08:17 Humble :)
08:24 jiffin ndevos: ping can u look into http://fpaste.org/292197/47921383/
08:26 ndevos jiffin: the Linux nfs-client tries to use one connection for multiple nfs-mounts to the same server; if one connection (mount, not showmount) hangs, another will hang too
08:27 jiffin ndevos: thats ok
08:27 ndevos jiffin: we've seen hangs related to multi-threaded e-poll and disperse volumes before, I thought it got improved in recent versions
08:28 jiffin ndevos: hmm
08:28 jiffin ndevos: but when I restart the nfs server again, I/O resumes gracefully
08:29 ndevos jiffin: well, the NFS-client will need to re-connect, and the hang in mt-epoll would have been resolved with the restart of the gluster/nfs process
08:31 jiffin ndevos: so if I reduce the epoll threads to 1, will it work?
08:31 ndevos jiffin: I would expect so; if it doesn't, it suggests some other issue
08:32 jiffin ndevos++ thanks
08:32 glusterbot jiffin: ndevos's karma is now 216
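If dropping back to a single epoll thread is worth trying, the thread counts are ordinary volume options in 3.7; a sketch, with the assumption that the gluster/nfs process honours the server-side setting:

    # reduce multi-threaded epoll to a single thread on clients and bricks
    gluster volume set <volname> client.event-threads 1
    gluster volume set <volname> server.event-threads 1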
08:35 kanagaraj joined #gluster-dev
08:37 tigert Humble: hey
08:39 obnox Humble: thanks!
08:39 obnox Humble++
08:39 glusterbot obnox: Humble's karma is now 27
08:39 obnox tigert++
08:39 glusterbot obnox: tigert's karma is now 17
08:40 jiffin ndevos: one more thing, can I identify which thread got hung from gdb of gluster-nfs?
08:41 ndevos jiffin: it is possible, but mt-epoll is a little funky and I wont be able to guide you through that
08:41 jiffin ndevos: that's sad :(
08:41 ndevos jiffin: I think KP gave a session about it, I didnt see it yet, but that should help
08:42 * jiffin attended that session, but didn't remember it clearly
08:45 pranithk xavih: hey, did you get a chance to see my comment for the internal fop handling in ec?
08:48 ndevos jiffin: do you know if that session was recorded?
08:48 jiffin ndevos: yes it was
08:49 ndevos jiffin: it would be nice to know where the recording is :)
08:49 jiffin ndevos: as far as i remember
08:50 jiffin ndevos: I am not sure who has it , may be anekkunt can help u with that
08:50 jiffin ndevos: otherwise I will check with sas
08:51 Manikandan joined #gluster-dev
08:51 ndevos jiffin: yeah, sas sent the email about the session, but I do not immediately see a follow-up with a link to the recording (and hopefully it can be posted publicly)
08:53 deepakcs joined #gluster-dev
08:55 pranithk xavih: I was thinking about the anon-fd we were using for internal reads and how it causes problems when the file gets unlinked. I am thinking of implementing a translator which will move the file to an internal directory on unlink. We can convert operations on the anon-fd into operations on that file. What do you think?
08:59 Saravana_ joined #gluster-dev
08:59 anoopcs y
09:07 zoldar pranithk: I have built a patched release of the packages and installed them on all three nodes. Doing a reboot now. After that I will convert one of the volumes back to the arbiter setup
09:11 pranithk zoldar: Do you know how to do it? we need to edit some files in /var/lib/glusterd
09:11 xavih pranithk: I don't see much difference between using a special pid for self heal and using another one for internal fops. I think this approach is cleaner than adding more special xdata entries
09:12 xavih pranithk: anyway the solution works and could be merged if you want
09:13 pranithk xavih: But there seems to be a problem with root-squash if we change NFS_PID i.e. pid==1
09:13 zoldar pranithk: I'll actually take the easy route - copy the data from the existing volume, stop and remove it, and then recreate it
09:13 zoldar pranithk: If you know a better way, please tell
09:14 xavih pranithk: why do we need to change NFS_PID? we should only create a new pid
09:14 Humble tigert, hi
09:14 xavih pranithk: for accessing unlinked files, wouldn't it be better to unlink the named file on 'rm' and keep the gfid file until all open fd's are closed ?
09:15 pranithk xavih: I mean frame->root->pid for the read fop will not be NFS_PID, so the server thinks it didn't come from the NFS process. So some authorizations seem to be missing...
09:15 xavih pranithk: this way anonymous fd could open the file even if removed
09:15 pranithk xavih: then new opens on the file would succeed even after the deletion, because the inode for the name will still be present in the inode tables, so they could give cached responses.
09:16 xavih pranithk: reads are made as root, so I don't think any authorization will be needed
09:16 pranithk xavih: Another problem with that approach is if the brick dies after deletion we don't know which files to cleanup
09:16 pranithk xavih: But for root-squash there is some special handling...
09:17 pranithk xavih: it changes the uid/gid. I myself am not very clear about that logic. I have been trying to reach that developer
09:17 pranithk zoldar: that works too :-)
09:17 pranithk zoldar: Do let me know if your issue goes away or not. I will be online for a while
09:17 xavih pranithk: I don't know how it works either
09:17 zoldar pranithk: Okay
09:17 xavih pranithk: ok, we can use the xdata approach
09:18 ggarg joined #gluster-dev
09:18 xavih pranithk: after a brick crash, self-heal should know which files to remove, but maybe this special case is not handled
09:20 pranithk xavih: but those gfid files are inside .glusterfs/... we don't have a list of those files anywhere. We will have to crawl the entire brick to find files which only have gfid-links
09:20 ndevos kshlm, csim: it seems that Jenkins fails to get triggers from Gerrit, https://build.gluster.org/gerrit-trigger/ shows a red [o]
09:20 pranithk xavih: We both are thinking in same direction... The reason for this new xlator, that it will make the list of these files and remember in some internal directory...
09:21 kshlm ndevos, I'll check.
09:21 ndevos kshlm: thanks!
09:21 Saravana_ joined #gluster-dev
09:25 lalatenduM joined #gluster-dev
09:25 jiffin ndevos: sas has the session video
09:25 jiffin and he will share it via mail
09:25 ndevos jiffin++ thanks!
09:25 glusterbot ndevos: jiffin's karma is now 15
09:27 pranithk josferna: I merged both the patches for ec which mark internal fops properly. Do you want to test and let me know if there is anything more to be done?
09:34 gem_ joined #gluster-dev
09:36 pranithk xavih: I think we don't need new xlator. We can do it inside posix.
09:39 pranithk xavih: When the final unlink comes on the file, after deleting .glusterfs/ab/cd/abcd... instead of deleting the file, we move it to 'unlinked' directory inside .glusterfs, and mark the inode as unlinked inside posix. If operations come on anon-fd and the file is unlinked, then we use the file path inside .unlinked to open the file and do the operations. When the brick comes up for the first time, we can delete any files present in .gluster
09:40 pranithk xavih: we can also delete these files when the inode is forgotten in posix xlator
09:40 pranithk xavih: what do you think?
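To make the .glusterfs/ab/cd/abcd... shorthand concrete: each file on a brick has a gfid-named hard link under .glusterfs, two hex characters per directory level. A sketch with placeholder paths; the 'unlinked' directory is only the proposal being discussed and does not exist in current releases:

    # the gfid of a file, read from the brick backend
    getfattr -n trusted.gfid -e hex /bricks/b1/somefile
    # its hard link lives at .glusterfs/<first 2 hex chars>/<next 2>/<gfid>
    ls /bricks/b1/.glusterfs/ab/cd/
    # proposed home for files whose last name was unlinked but which must stay
    # reachable through anonymous fds:
    #   /bricks/b1/.glusterfs/unlinked/<gfid>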
09:43 xavih pranithk: There's a problem when the brick comes up after a crash. If you delete the unlinked files at start but there were fd's still open, this brick won't be able to serve the requests and self-heal won't be able to repair it from other bricks that still have them
09:44 xavih pranithk: maybe we could delay the removal of unlinked files for some time, to allow enough time for clients to connect and reopen fd's. Those unlinked files that are not reopened by some client could be deleted
09:44 pranithk xavih: hmm... that is a problem now anyway. On disconnects, we close the file.
09:44 xavih pranithk: it's still not perfect though
09:44 pranithk xavih: I guess that can also be done :-)
09:46 pranithk xavih: At least till now, if we have a file opened and the file is deleted, on disconnect the fd becomes bad.
09:46 pranithk xavih: What you suggest will fix that as well.
09:46 kshlm ndevos, Everything seems to be fine with gerrit and jenkins. The connection works, and querying for changes in jenkins works. Except the listener connection everything else works. I've got no idea why. :(
09:47 pranithk xavih: but there is a bigger problem. What if the client which deleted the file crashes... when do we delete it? Hmm... I think it is a separate problem, but with a possible fix we can do at a later stage?
09:47 ndevos kshlm: yeah, I've queried+triggered some of the changes/jobs, that worked fine...
09:48 pranithk xavih: For now I/Ashish will just implement the initial part to handle unlinks when all the clients/bricks are working fine?
09:48 ndevos kshlm: I dont know if there is anything in some log when you click that red [o], it should try to connect again
09:48 ndevos kshlm: maybe Gerrit is not feeding the triggers anymore?
09:49 kshlm ndevos, Nothing in the logs either.
09:49 zoldar pranithk: not that I'm complaining, but maybe there should be clearer indication of arbiter setup in the "info" output? :) Like an annotation by the brick(s) that it's an arbiter.
09:49 xavih pranithk: if the client crashes, I think there's no way to keep the file open
09:50 xavih pranithk: the initial approach is good enough I think. The later case is a different issue I think
09:50 ndevos kshlm: I have no idea how to debug it, what would the next steps be?
09:50 pranithk xavih: okay. Cool. This will solve the unlink problem without any change to ec!. Okay we will get to work in implementing this. Thanks xavi!
09:51 pranithk zoldar: No dude, you should! Users perspective is always better than us developers :-)
09:51 pranithk zoldar: At the moment, do you see (2+1) in volume info?
09:51 aspandey joined #gluster-dev
09:52 zoldar pranithk: yeah, I do, but had to actually look into patch to notice that ;)
09:52 pranithk zoldar: hehe :-). How do you like the output be? wanna help us out?
09:52 pranithk itisravi: ^^
09:53 zoldar something like "Brick3: web-rep:/GFS/system/mail1 (arbiter)" ?
09:53 itisravi Hmm at the moment, it is always the last brick in every replica pair that's the arbiter. So maybe it doesn't make much difference?
09:53 zoldar itisravi: True, but it would be cool if the indication of the arbiter's presence were clearer
09:54 dlambrig_ joined #gluster-dev
09:56 itisravi zoldar: agreed. It would also make sense if  we have 'configurable' arbiter nodes. i.e select which brick gets to be the arbiter
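For reference, the only hint today is the replica arithmetic in the brick-count line; a sketch of the current output for a 2+1 volume (placeholder names), with zoldar's suggested annotation shown alongside:

    gluster volume info testvol
    #   Volume Name: testvol
    #   Type: Replicate
    #   Number of Bricks: 1 x (2 + 1) = 3
    #   Brick1: host1:/bricks/b1
    #   Brick2: host2:/bricks/b2
    #   Brick3: host3:/bricks/arbiter      <- proposed: "Brick3: ... (arbiter)"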
09:56 kshlm ndevos, I'll try adding a new gerrit server entry. It could probably work.
09:56 ndevos kshlm: okay, and if not, maybe send an email to the maintainers/devel list and explain how to manually trigger runs?
10:04 pranithk zoldar: Do you mind raising a bug for suggesting this improvement? We will be happy to open it ourselves if you don't have the time...
10:05 Saravana_ joined #gluster-dev
10:06 zoldar pranithk: sure, will file it
10:08 kshlm ndevos, Didn't work. I think it's mostly an issue with version incompatibility between jenkins and the plugin.
10:08 kshlm csim had updated jenkins recently. But the plugins haven't been updated.
10:09 ndevos kshlm: oh, I guess thats possible
10:10 kshlm I'll send an announcement on the maintainers list to do manual triggering.
10:10 zoldar stupid question - where's the bug tracker? :)
10:10 kshlm I guess I could also update the plugins now. This is a good time as no new builds are getting triggered.
10:14 Manikandan joined #gluster-dev
10:14 csim grmbl, lovely
10:15 csim that's why I was reluctant about using the upstream rpm, they seem to not care about compatibility :/
10:15 kshlm csim, are you talking of jenkins?
10:16 csim kshlm: yes
10:18 zoldar pranithk: I'm sorry, but I think it will be better/quicker if you file the bug yourself - I can't login to gerrit with github - oauth fails me there
10:18 pranithk zoldar: You have to file the bug in bugzilla.redhat.com
10:18 pranithk ndevos: ^^ do you have a document?
10:18 zoldar ouch
10:18 zoldar ok
10:19 kshlm pranithk, glusterbot had a trigger for bugzilla. I don't know how it works though.
10:19 kshlm !bugzilla
10:19 kshlm !bugs
10:19 ndevos zoldar: link to the bugzilla is in #gluster , no idea how glusterbot works here
10:20 ndevos glusterbot: help
10:20 glusterbot ndevos: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin. You may also want to use the 'list' command to list all available plugins and commands.
10:20 ndevos glusterbot: list
10:20 glusterbot ndevos: Admin, Alias, Anonymous, Bugzilla, Channel, ChannelStats, Conditional, Config, Dict, Factoids, Google, Herald, Karma, Later, MessageParser, Misc, Network, NickCapture, Note, Owner, Plugin, PluginDownloader, Reply, Seen, Services, String, Topic, Trigger, URL, User, Utilities, and Web
10:20 ndevos glusterbot: MessageParser
10:21 ndevos glusterbot: help MessageParser
10:21 glusterbot ndevos: Error: There is no command "messageparser". However, "Messageparser" is the name of a loaded plugin, and you may be able to find its provided commands using 'list Messageparser'.
10:21 ndevos glusterbot: list MessageParser
10:21 glusterbot ndevos: add, info, list, lock, rank, remove, show, unlock, and vacuum
10:21 kshlm ndevos, its the Factoids plugin IIRC
10:21 bkunal|training joined #gluster-dev
10:21 ndevos kshlm: I think messageparser acts on someone saying "file a bug"
10:22 kshlm ah,
10:22 ndevos kshlm: Factoids would be ,,(bug) or something
10:22 kshlm okay.
10:22 ndevos glusterbot: show file a bug
10:22 glusterbot ndevos: (show [<channel>|global] [--id] <regexp>) -- Looks up the value of <regexp> in the triggers database. <channel> is only necessary if the message isn't sent in the channel itself. If option --id specified, will retrieve by regexp id, not content.
10:22 ndevos glusterbot: show #gluster file a bug
10:22 glusterbot ndevos: (show [<channel>|global] [--id] <regexp>) -- Looks up the value of <regexp> in the triggers database. <channel> is only necessary if the message isn't sent in the channel itself. If option --id specified, will retrieve by regexp id, not content.
10:23 ndevos well, just go to #gluster :)
10:23 vimal joined #gluster-dev
10:24 kshlm Funny! Shouldn't glusterbot be working here too.
10:24 ndevos glusterbot works here too, but with some reduced functionality
10:25 ndevos most of the info glusterbot collects, seems to be channel specific
10:25 ndevos glusterbot: karma kshlm
10:25 glusterbot ndevos: Karma for "kshlm" has been increased 42 times and decreased 0 times for a total karma of 42.
10:27 csim mhh, why do we keep a clone of https://github.com/gluster/jenkins-ssh-slaves-plugin ?
10:29 ndevos csim: because ggarg was working on testing patches to poweron/poweroff slaves on demand
10:29 zoldar pranithk: report filed: https://bugzilla.redhat.com/show_bug.cgi?id=1283570
10:29 glusterbot Bug 1283570: low, unspecified, ---, bugs, NEW , Better indication of arbiter brick presence in a volume.
10:29 ndevos csim: that would have reduced the online time of slaves quite a bit, hopefully saving us some rackspace credit
10:29 ndevos csim: but, I think he got re-assigned to work on other things...
10:30 ndevos csim: some details in https://github.com/gluster/jenkins-ssh-slaves-plugin/commit/41693e37b0eed2a4a3a55da5d4f22c377fb4c823
10:31 csim ndevos: ok, so the trick was that it's in a different branch
10:31 ndevos csim: heh, yes
10:31 csim I would love to reduce the number of teams
10:31 csim as we have a few teams of 2 people :/
10:32 csim like, do we need to have 1 team for web and 1 for gluster planet ?
10:32 ndevos if the people on the teams are basically the same, I think having one "web" team would be sufficient
10:33 pranithk zoldar: Thanks!! keep in touch with any of itisravi, atalur_, kdhananjay and pranithk for anything related to replication/arbiter. We need more people to use it so that we get more feedback on where to improve things. Thanks a lot for your time!!
10:34 csim ndevos: well, one team is jclift and tigert, the other is tigert and 4 people
10:35 csim I think getting jclift in the web group would be ok, given he is github admin
10:36 ndevos csim: ask someone in those teams, maybe amye or tigert and have them decide if merging is best
10:36 ndevos csim: I agree that we should try to have few teams, and those teams can have multiple repositories
10:38 zoldar pranithk: thanks for being so responsive :)
10:38 ndevos csim: there is also a 'presentation' team, no idea what that is for
10:39 pranithk zoldar: np :-)
10:39 csim ndevos: yep, and the gluster forge admin team, where we don't use gluster forge anymore :)
10:39 pranithk zoldar: Do remember the aliases of the people who work on replication/arbiter; in case one is not available, you can reach the others.
10:40 csim GlusterFS IOStat Developers , for a GSOC, should we merge ?
10:50 ndevos csim: about the forge, do you know how far that has been migrated (or kshlm?)
10:51 csim ndevos: all was migrated, no ?
10:51 ndevos csim: I thought not, but have not checked anything recently
10:53 ndevos csim: a quick scan on github shows that at least the nightly builds repository with scripts/wiki is not migrated
10:53 kshlm ndevos, https://github.com/gluster/forge/blob/master/old_forge_repos.txt <- this is still best answer for the forge migrations status.
10:53 ndevos kshlm: ah, thanks!
10:54 ndevos oh, lol, "Niels will probably migrate this" is a nice note
10:54 csim not sure if the document is up to date
10:55 kshlm I was supposed to send out a second round of mail announcing the closure, and I absolutely didn't do it.
10:55 * kshlm should start writing down things he has agreed to do.
10:56 ndevos kshlm: oh, and for the github.com/<user> repos in there, we really should ask them if they want to move it to the gluster org
10:58 mchangir|wfh joined #gluster-dev
10:58 kshlm ndevos, I think justin asked about that in his communications. The owners were free to migrate to their own github accounts or to the gluster org. But all would be linked to from forge-v2.
10:59 ndevos kshlm: hmm, ok, we should state that we have a preference for the gluster org, users will be able to find 'official' things easier
11:01 glusterbot` joined #gluster-dev
11:03 RedW joined #gluster-dev
11:06 ndevos vmallika: do you know if someone reported a coredump while running bug-1242882-do-not-pass-volinfo-quota.t ?
11:07 ndevos vmallika: https://build.gluster.org/job/rackspace-regression-2GB-triggered/15936/consoleFull is the job that failed due to that
11:11 ndevos vmallika++ thanks!
11:11 glusterbot` ndevos: vmallika's karma is now 8
11:21 glusterbot joined #gluster-dev
11:22 jiffin1 joined #gluster-dev
11:24 zoldar pranithk: blocked locks start to pile up again :(
11:25 pranithk zoldar: what!
11:26 pranithk zoldar: did you remount after the upgrade?
11:26 zoldar pranithk: do you want statedump output?
11:26 zoldar I actually did a full reboot
11:26 zoldar of all nodes
11:27 zoldar I mean, I completely wiped the volume and recreated it
11:27 Saravana_ joined #gluster-dev
11:27 pranithk zoldar: got it
11:27 zoldar before that
11:27 pranithk zoldar: could you give me statedumps?
11:27 zoldar sure, just a sec
11:28 pranithk zoldar: I also need the /var/lib/glusterd/vols/<arbiter-vol> zipped
11:28 zoldar pranithk: from every node?
11:29 pranithk zoldar: yes...
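A sketch of how that data is typically collected, with placeholder names; the dump location shown is the default and can be changed via server.statedump-path:

    # ask the brick (and self-heal) processes of the volume to dump their state
    gluster volume statedump <arbiter-vol>
    # the dumps normally appear under /var/run/gluster/ as <brick-path>.<pid>.dump.<timestamp>
    ls /var/run/gluster/
    # pack the glusterd view of the volume, as requested above
    tar czf arbiter-vol-glusterd.tar.gz /var/lib/glusterd/vols/<arbiter-vol>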
11:30 EinstCrazy joined #gluster-dev
11:32 zoldar pranithk: email sent
11:32 pranithk zoldar: give me a minute
11:32 zoldar sure
11:33 zhangjn joined #gluster-dev
11:33 rjoseph joined #gluster-dev
11:33 pranithk zoldar: I am yet to get the mail... refreshing my email client crazy...
11:33 vmallika joined #gluster-dev
11:34 zoldar pranithk: pkarampu at redhat dot com right?
11:38 ira joined #gluster-dev
11:38 pranithk zoldar: yes...
11:39 pranithk zoldar: sometimes it acts up I think... How big were the files?
11:39 zoldar pranithk: all of it is ~100 KB so not that much
11:40 pranithk zoldar: hmm... :-(
11:41 pranithk zoldar: do you mind adding it as attachment for the bug: 1275247, not sure why the mail is not coming :-(
11:41 ndevos atinm: core file caused by tests/bugs/glusterd/bug-913555.t in https://build.gluster.org/job/rackspace-regression-2GB-triggered/15941/consoleFull
11:41 jiffin joined #gluster-dev
11:41 ndevos atinm: do you know if that has been reported before?
11:41 zoldar pranithk: yeah, just a sec
11:41 pranithk zoldar: I am in a hurry, will need to leave office in 20 minutes, want to complete this...
11:42 pranithk xavih: aspandey will be working on the unlink/readv bug....
11:45 * atinm is checking
11:45 zoldar pranithk: files sent
11:46 atinm ndevos, I don't think its been reported earlier
11:46 atinm ndevos, could you raise a bug for this?
11:47 pranithk zoldar: looking into them
11:50 ndevos atinm: oh, looking into the core, it is the same issue that vmallika reported as bug 1283595
11:50 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1283595 unspecified, unspecified, ---, bugs, NEW , rpc_clnt: client process crashed at emancipate (ctx=0x0, ret=-1) at glusterfsd/src/glusterfsd.c:1329
11:51 pranithk zoldar: Are you sure you updated everything?
11:51 pranithk zoldar: I don't see "arbiter_count=1" in the file /system_mail1_cluster-vm/info
11:52 pranithk zoldar: I just created arbiter volume on my setup and it created the volume with this option....
11:52 pranithk zoldar: you just upgraded right?
11:52 pranithk zoldar: I think we need to execute some command after upgrade to change the cluster version... wait
11:52 pranithk kshlm: ^^
11:53 kshlm pranithk, `gluster volume set all op-version <version>`
11:53 vmallika joined #gluster-dev
11:53 pranithk kshlm: it is supposed to be 3.7.6 I think...
11:53 kshlm The version will be 30706 then
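A sketch of the bump and a quick check that it took effect; on some releases the option key is cluster.op-version rather than the short form, so adjust accordingly:

    # raise the cluster operating version so new volume options (e.g. arbiter_count) are persisted
    gluster volume set all cluster.op-version 30706
    # confirm on each node
    grep operating-version /var/lib/glusterd/glusterd.info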
11:53 pranithk zoldar: ^^
11:56 zoldar pranithk: yeah, I upgraded
11:56 zoldar so what I need to execute is `gluster volume set all op-version 30706` right?
11:58 zoldar pranithk: ok, op-version bumped, what shall I do now?
11:58 zoldar pranithk: recreate the volume again?
11:59 pranithk zoldar: create a new volume and see if you see arbiter_count=1 in the new-volume/info file?
11:59 zoldar sure
11:59 zoldar ok this will take a minute
11:59 Manikandan rastar++, thank you :-)
11:59 glusterbot Manikandan: rastar's karma is now 15
11:59 pranithk zoldar: I will be back online after ~5-6 hours... will that be fine?
12:00 pranithk zoldar: which time zone? I am from India
12:00 zoldar pranithk: sure, thanks, I'm gmt+1 now
12:00 pranithk zoldar: will you be available?
12:00 zoldar yeah, later in the night
12:00 zoldar wil try to catch up
12:01 zoldar if not, then tomorrow, no worries
12:01 pranithk zoldar: do one thing. just create a new test volume
12:01 pranithk zoldar: see if that arbiter_count is 1
12:02 zoldar where do I check that?
12:02 pranithk zoldar: /var/lib/glusterd/vols/<new-volname>/info
12:02 zoldar pranithk: yeah, it's there
12:02 zoldar what about existing ones? can I fix them without wipe?
12:03 pranithk zoldar: it was not there for the volume information you gave me.
12:03 zoldar yeah, I see
12:03 atinm ndevos, anekkunt will give a try on 1283595
12:03 pranithk zoldar: yeah, just add it at the end of the 'info' file. Stop the glusterds and start them again, then stop the volume and start it again
12:04 zoldar ok, I will add that and just reboot the whole thing
12:04 zoldar and we'll see
12:04 pranithk zoldar: On all the machines...
12:04 zoldar thanks!
12:04 zoldar right
12:04 pranithk zoldar: no need to reboot
12:04 pranithk zoldar: just do the steps I mentioned... :-)
12:04 pranithk zoldar: it will take less time
12:05 ndevos anekkunt++ thanks!
12:05 glusterbot ndevos: anekkunt's karma is now 7
12:05 zoldar pranithk: It's not that simple because there are some mounts automatically handled by PVE, nvm, I will handle that
12:05 zoldar pranithk++
12:05 glusterbot zoldar: pranithk's karma is now 31
12:07 pranithk zoldar: In that case mail me what happened. Attach the same files after you do the process. Along with it, also attach /var/lib/glusterd/glustershd/glustershd-server.vol file to the bug. I will check it once I am back online
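Pulling pranithk's steps together, a sketch of the manual fix; run it on every node, only after the patched build is installed, and the volume name is a placeholder:

    # 1. append the missing key to the volume's info file
    echo "arbiter_count=1" >> /var/lib/glusterd/vols/<arbiter-vol>/info
    # 2. restart glusterd so it re-reads its store (service name may differ on Debian-based builds)
    systemctl restart glusterd
    # 3. bounce the volume so the bricks and self-heal daemon pick up the arbiter role
    gluster volume stop <arbiter-vol>
    gluster volume start <arbiter-vol>
    # if the problem persists, /var/lib/glusterd/glustershd/glustershd-server.vol is the
    # generated file pranithk asked to have attached to the bug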
12:07 zoldar shouldn't op-version be bumped with upgrade?
12:07 zoldar ok
12:07 pranithk zoldar: it does a lazy upgrade upon executing some command that needs the new version....
12:07 pranithk zoldar: okay gotta go now... cya
12:07 zoldar cya
12:08 rafi1 joined #gluster-dev
12:13 nishanth joined #gluster-dev
12:18 ndarshan joined #gluster-dev
12:28 atinm ndevos, anekkunt has root caused the crash
12:28 atinm anekkunt, thanks for the quick turn around
12:28 atinm sakshi is gonna send a patch to fix it
12:28 atinm anekkunt++
12:28 glusterbot atinm: anekkunt's karma is now 8
12:30 anekkunt ndevos, I have root caused it ... sakshi is going to send a patch for this
12:30 kkeithley1 joined #gluster-dev
12:36 ashiq atinm++ rastar++ thanks :)
12:36 glusterbot ashiq: atinm's karma is now 37
12:36 glusterbot ashiq: rastar's karma is now 16
12:44 kdhananjay1 joined #gluster-dev
12:49 hgowtham Manikandan++
12:49 glusterbot hgowtham: Manikandan's karma is now 34
12:59 Apeksha joined #gluster-dev
13:08 overclk joined #gluster-dev
13:15 kdhananjay joined #gluster-dev
13:20 ppai joined #gluster-dev
13:21 nbalacha joined #gluster-dev
13:33 vimal joined #gluster-dev
13:37 rafi joined #gluster-dev
13:37 kshlm csim, you around?
13:37 kshlm csim, need some help with salt.
13:44 shyam joined #gluster-dev
13:46 csim kshlm: in 10 minutes ?
13:46 kshlm csim, sure.
13:58 pppp joined #gluster-dev
14:02 kshlm csim, never mind. It was my mistake. I was trying to run `salt.highstate` instead of `state.highstate`, and it always failed with `salt.highstate` not found.
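For anyone hitting the same confusion: highstate is a function of the state execution module, so the module prefix is state; a minimal sketch:

    # apply the full highstate to all minions from the master
    salt '*' state.highstate
    # or run it locally on a single minion
    salt-call state.highstate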
14:05 rjoseph joined #gluster-dev
14:05 csim kshlm: ok
14:08 overclk joined #gluster-dev
14:09 csim kshlm: beware of salt doc, it is quite confusing :)
14:11 ndevos is the internet broken? my connection seems very slow today
14:11 kshlm csim, Yup. It's really hard to follow. I was in a tutorial on one page, and in the middle of a configuration document on the next.
14:14 kkeithley_ ndevos: just ask the NSA and GCHQ to stop reading everything going into and coming out of Belgium and Netherlands
14:16 ndevos kkeithley_: oh, now I wonder if they collect data at my home lan too, connection to my local test systems seems to be affected too
14:19 ndevos maybe this is related? NetworkManager[1267]: <error> [1447940791.210729] [devices/nm-device.c:2617] activation_source_schedule(): (wlp3s0): activation stage already scheduled
14:19 * ndevos disconnects and hopes to be able to connect later again
14:26 hagarth_ joined #gluster-dev
14:40 jiffin joined #gluster-dev
14:54 josferna joined #gluster-dev
14:58 nishanth joined #gluster-dev
15:03 shubhendu joined #gluster-dev
15:05 shyam joined #gluster-dev
15:06 Chr1st1an joined #gluster-dev
15:10 skoduri joined #gluster-dev
15:11 sakshi joined #gluster-dev
15:14 rjoseph joined #gluster-dev
15:55 ndevos kkeithley_: should we describe that when all patches for a bug against mainline have been merged, the bug should get closed/nextrelease?
15:56 ndevos kkeithley_: it would need to get added to http://gluster.readthedocs.org/en/latest/Contributors-Guide/Bug-report-Life-Cycle/
15:57 ndevos and https://public.pad.fsfe.org/p/gluster-automated-bug-workflow
16:07 spalai joined #gluster-dev
16:11 cholcombe joined #gluster-dev
16:16 kdhananjay joined #gluster-dev
16:16 kshlm joined #gluster-dev
16:23 gem_ joined #gluster-dev
16:28 spalai joined #gluster-dev
16:54 rafi joined #gluster-dev
16:57 skoduri joined #gluster-dev
17:14 ggarg joined #gluster-dev
17:21 shaunm joined #gluster-dev
17:28 kkeithley_ ndevos: ...all patches for a bug against mainline have been merged, the bug should get closed/nextrelease?
17:29 kkeithley_ all patches meaning what, exactly?  fixes backported to each branch?  A bug with two or more separate fixes?
17:30 kkeithley_ are you referring to all the bugs I just closed?
17:32 ndevos kkeithley_: a bug that has multiple patches, like we have discussed for the automatic bug status update scripts
17:33 ndevos nbalacha (other tiering folks are offline): do you know about a failure in ./tests/basic/tier/record-metadata-heat.t ?
17:34 ndevos that just happened on http://build.gluster.org/job/rackspace-regression-2GB-triggered/15964/consoleFull
17:34 nbalacha ndevos, that was moved to bad tests recently
17:34 ndevos nbalacha: oh, how recently? I did a rebase ~2 hours ago
17:34 nbalacha ndevos, there was a timing issue that caused a spurious failure IIRC
17:35 nbalacha ndevos, at least a day or 2 ago
17:35 nbalacha ndevos, let me see if it was removed
17:36 kkeithley_ I'm not remembering the auto bug status discussion. (sorry :-/)   bugs fixed in upstream mainline branch would never get ON_QA or VERIFIED. Or do they?  So leaving them open forever seems bad.
17:37 kkeithley_ how would auto bug processing detect that a particular bug has 2+ fixes/patches?
17:37 nbalacha ndevos, http://review.gluster.org/12591
17:38 nbalacha it was moved to bad tests
17:38 ndevos nbalacha: oh, indeed, it is marked as bad, "Ignoring failure from known-bad test ./tests/basic/tier/record-metadata-heat.t"
17:38 ndevos sorry!
17:38 * ndevos scrolls further down
17:38 nbalacha ndevos, np :)
17:38 kkeithley_ If a patch gets merged, I guess the most auto bug processing could do is change to MODIFIED.  Then manually changed to NEXTRELEASE when the final patch is merged?
17:39 kkeithley_ changed to CLOSED/NEXTRELEASE
17:39 ndevos hmm, any quota devs around? tests/bugs/quota/bug-1104692.t would be a spurious failure
17:40 ndevos kkeithley_: yeah, but that change is not documented in our bug life cycle, so we need to add that and update the plan for the scripts
17:41 mchangir|wfh ndevos, any plans today to merge rhel-5 build fix patch downstream?
17:41 kkeithley_ sure.  I think it's logical that an upstream  bug that has its final patch merged should be changed to CLOSED/{NEXTRELEASE,CURRENTRELEASE,$whatever}
17:42 kkeithley_ that's better than leaving them as MODIFIED forever
17:43 ndevos mchangir|wfh: I wanted to check it out, but did not have time for it yet, and it's getting dinner time here
17:43 kkeithley_ mchangir|wfh: need someone to set the Verified bit before it can be merged.
17:43 mchangir|wfh okay
17:44 mchangir|wfh I don't want to take you away from your dinner :)
17:44 kkeithley_ and since it's a downstream patch, this is kinda the wrong place to talk about it. ;-)
17:44 ndevos I was going to test+verify it, hope to be able to do so tomorrow
17:44 mchangir|wfh damn, I never get this chat thing right
17:45 kkeithley_ not a big deal.
17:46 kkeithley_ this is pretty tame compared to some of the real bone-headed mistakes I've made in my life. ;-)
17:46 mchangir|wfh ndevos, no worries
17:46 kkeithley_ well, I need a coffee IV drip to recover from ethics, insider trading, and security training. biab
17:47 mchangir|wfh IV drip eh!
17:47 * ndevos logs off for the day, cya tomorrow!
17:50 kkeithley_ do you know a better way to get lots of caffeine into my system in a hurry? ;-)
17:51 kkeithley_ okay, not really an IV drip, but you get the idea
17:52 kotreshhr joined #gluster-dev
17:54 hagarth_ kkeithley_: a syringe shot (all at once?) ;)
17:56 kotreshhr1 joined #gluster-dev
17:56 kotreshhr1 left #gluster-dev
18:08 dlambrig_ joined #gluster-dev
18:15 pranithk joined #gluster-dev
18:15 pranithk zoldar: hey! found anything?
18:15 pranithk zoldar: are things working fine?
18:18 EinstCrazy joined #gluster-dev
18:24 pranithk zoldar: guess we shall talk tomorrow :-)
18:31 Chr1st1an joined #gluster-dev
19:18 rjoseph joined #gluster-dev
19:41 rafi joined #gluster-dev
19:49 wushudoin joined #gluster-dev
21:11 EinstCrazy joined #gluster-dev
21:22 dlambrig_ joined #gluster-dev
21:41 dlambrig_ joined #gluster-dev
21:45 hagarth_ joined #gluster-dev
21:50 _Bryan_ joined #gluster-dev
21:53 cholcombe joined #gluster-dev
22:19 xavih joined #gluster-dev
23:38 zhangjn joined #gluster-dev
23:39 zhangjn joined #gluster-dev
23:40 zhangjn joined #gluster-dev
23:43 dlambrig_ joined #gluster-dev
