
IRC log for #gluster, 2015-03-23


All times shown according to UTC.

Time Nick Message
00:11 gildub joined #gluster
00:15 javi404 joined #gluster
00:17 julim joined #gluster
01:01 topshare joined #gluster
01:09 edong23 joined #gluster
01:15 glusterbot News from newglusterbugs: [Bug 1023638] Change location of volume configuration files <https://bugzilla.redhat.com/show_bug.cgi?id=1023638>
01:15 glusterbot News from newglusterbugs: [Bug 1116782] Please add runtime option to show compile time configuration <https://bugzilla.redhat.com/show_bug.cgi?id=1116782>
01:15 glusterbot News from newglusterbugs: [Bug 1138567] Disabling Epoll (using poll()) - fails SSL tests on mount point <https://bugzilla.redhat.com/show_bug.cgi?id=1138567>
01:15 glusterbot News from newglusterbugs: [Bug 1181500] Brick process on FreeBSD crashes when mounting with a 3.4 Linux client <https://bugzilla.redhat.com/show_bug.cgi?id=1181500>
01:15 glusterbot News from newglusterbugs: [Bug 1195053] glusterd 3.6.2 failing on FreeBSD; ImportError: cannot import name ENODATA <https://bugzilla.redhat.com/show_bug.cgi?id=1195053>
01:15 glusterbot News from newglusterbugs: [Bug 1113907] AFR: Inconsistent GlusterNFS behavior v/s GlusterFUSE during metadata split brain on directories <https://bugzilla.redhat.com/show_bug.cgi?id=1113907>
01:15 glusterbot News from newglusterbugs: [Bug 1133820] distribute fails to heal grand parent quota xattrs <https://bugzilla.redhat.com/show_bug.cgi?id=1133820>
01:15 glusterbot News from newglusterbugs: [Bug 1176011] Client sees duplicated files <https://bugzilla.redhat.com/show_bug.cgi?id=1176011>
01:15 glusterbot News from newglusterbugs: [Bug 764245] [FEAT] glusterfs requires CAP_SYS_ADMIN capability for "trusted" extended attributes - container unfriendly <https://bugzilla.redhat.com/show_bug.cgi?id=764245>
01:15 glusterbot News from newglusterbugs: [Bug 858732] glusterd does not start anymore on one node <https://bugzilla.redhat.com/show_bug.cgi?id=858732>
01:15 glusterbot News from newglusterbugs: [Bug 987188] Through client none of the 'debug.*' options work <https://bugzilla.redhat.com/show_bug.cgi?id=987188>
01:15 glusterbot News from newglusterbugs: [Bug 987239] gluster cli edits replicate volume options even when the volume type is non replicate <https://bugzilla.redhat.com/show_bug.cgi?id=987239>
01:15 glusterbot News from newglusterbugs: [Bug 1120570] glustershd high memory usage on FreeBSD <https://bugzilla.redhat.com/show_bug.cgi?id=1120570>
01:15 glusterbot News from newglusterbugs: [Bug 1128525] gluster volume status provides N/A ouput NFS server and Self-heal Daemon <https://bugzilla.redhat.com/show_bug.cgi?id=1128525>
01:15 glusterbot News from newglusterbugs: [Bug 1155181] Lots of compilation warnings on OSX.  We should probably fix them. <https://bugzilla.redhat.com/show_bug.cgi?id=1155181>
01:15 glusterbot News from newglusterbugs: [Bug 1165140] hardcoded gsyncd path causes geo-replication to fail on non-redhat systems <https://bugzilla.redhat.com/show_bug.cgi?id=1165140>
01:15 glusterbot News from newglusterbugs: [Bug 1165142] hardcoded gsyncd path causes geo-replication to fail on non-redhat systems <https://bugzilla.redhat.com/show_bug.cgi?id=1165142>
01:16 glusterbot News from newglusterbugs: [Bug 1135358] Update licensing and move all MacFUSE references to OSXFUSE <https://bugzilla.redhat.com/show_bug.cgi?id=1135358>
01:16 glusterbot News from newglusterbugs: [Bug 1162905] hardcoded gsyncd path causes geo-replication to fail on non-redhat systems <https://bugzilla.redhat.com/show_bug.cgi?id=1162905>
01:16 glusterbot News from resolvedglusterbugs: [Bug 995784] Spurious log messages on initial start from glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=995784>
01:16 glusterbot News from resolvedglusterbugs: [Bug 772808] get_mem function support for platforms other than linux <https://bugzilla.redhat.com/show_bug.cgi?id=772808>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1040348] mount.glusterfs needs cleanup and requires option validation using getopt <https://bugzilla.redhat.com/show_bug.cgi?id=1040348>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1090807] Remove autogenerated xdr routines and coroutines <https://bugzilla.redhat.com/show_bug.cgi?id=1090807>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1128820] Unable to ls -l NFS mount from OSX 10.9 client on pool created with stripe <https://bugzilla.redhat.com/show_bug.cgi?id=1128820>
01:16 glusterbot News from resolvedglusterbugs: [Bug 766040] Add rdma attributes as configurable options <https://bugzilla.redhat.com/show_bug.cgi?id=766040>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763121] scale-n-defrag fails to work as expected in 3.0.5 <https://bugzilla.redhat.com/show_bug.cgi?id=763121>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763665] Segfault while expansion of volume from distributed mirror <https://bugzilla.redhat.com/show_bug.cgi?id=763665>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763666] Server segfault with 3.1 platform ISO during volume expansion <https://bugzilla.redhat.com/show_bug.cgi?id=763666>
01:16 glusterbot News from resolvedglusterbugs: [Bug 762755] Server crashes on Gentoo when 2.0.2 clients try to connect <https://bugzilla.redhat.com/show_bug.cgi?id=762755>
01:16 glusterbot News from resolvedglusterbugs: [Bug 762350] CIFS volumes used space shows twice the value from linux on windows <https://bugzilla.redhat.com/show_bug.cgi?id=762350>
01:16 glusterbot News from resolvedglusterbugs: [Bug 762984] smbtorture on glusterfs renders the client mountpoint return (Stale NFS File Handle) <https://bugzilla.redhat.com/show_bug.cgi?id=762984>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763652] Start and stop glusterd fails to start a previously created volume <https://bugzilla.redhat.com/show_bug.cgi?id=763652>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1039643] mount option "backupvolfile-server" causes mount to fail with "Invalid argument" <https://bugzilla.redhat.com/show_bug.cgi?id=1039643>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763122] RPC errors from VMware ESX mounts and showmount hangs on 10gige <https://bugzilla.redhat.com/show_bug.cgi?id=763122>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763235] segfault in distribute during failover testing <https://bugzilla.redhat.com/show_bug.cgi?id=763235>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763704] xcs get doesn't work with gNFS <https://bugzilla.redhat.com/show_bug.cgi?id=763704>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763871] [3.1.1qa9] After failover NFS writes to certain files but hangs for few. <https://bugzilla.redhat.com/show_bug.cgi?id=763871>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763651] segfault while stopping and starting volume again <https://bugzilla.redhat.com/show_bug.cgi?id=763651>
01:16 glusterbot News from resolvedglusterbugs: [Bug 762601] replicate crash in selfheal <https://bugzilla.redhat.com/show_bug.cgi?id=762601>
01:16 glusterbot News from resolvedglusterbugs: [Bug 762614] directory traversing problem (client crash) <https://bugzilla.redhat.com/show_bug.cgi?id=762614>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1146263] Initial Georeplication fails to use correct GID on folders ONLY <https://bugzilla.redhat.com/show_bug.cgi?id=1146263>
01:16 glusterbot News from resolvedglusterbugs: [Bug 764550] Permission problems with gluster NFS works with native FUSE <https://bugzilla.redhat.com/show_bug.cgi?id=764550>
01:16 glusterbot News from resolvedglusterbugs: [Bug 765565] uuid_types.h should be autogenerated per architecture just like e2fsprogs <https://bugzilla.redhat.com/show_bug.cgi?id=765565>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1075182] Despite glusterd init script now starting before netfs, netfs fails to mount localhost glusterfs shares in RHS 2.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1075182>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1105337] SSH works, but push-pem fails to connect (host unreachable) <https://bugzilla.redhat.com/show_bug.cgi?id=1105337>
01:16 glusterbot News from resolvedglusterbugs: [Bug 765146] None of nfs cli volume set options work <https://bugzilla.redhat.com/show_bug.cgi?id=765146>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763656] If a node is out of disk space "volume is created" but no proper error message <https://bugzilla.redhat.com/show_bug.cgi?id=763656>
01:16 glusterbot News from resolvedglusterbugs: [Bug 839768] firefox-10.0.4-1.el5_8-x86_64 hang when rendering pages on glusterfs client <https://bugzilla.redhat.com/show_bug.cgi?id=839768>
01:16 glusterbot News from resolvedglusterbugs: [Bug 764661] Pathetic message upon a umount request <https://bugzilla.redhat.com/show_bug.cgi?id=764661>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1130307] MacOSX/Darwin port <https://bugzilla.redhat.com/show_bug.cgi?id=1130307>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1130308] FreeBSD port for GlusterFS <https://bugzilla.redhat.com/show_bug.cgi?id=1130308>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1144163] regression tests fail on osx due to delay - requires explicit ``sleep `` <https://bugzilla.redhat.com/show_bug.cgi?id=1144163>
01:16 glusterbot News from resolvedglusterbugs: [Bug 761930] glusterfs log print "TLA Revision" tag, remove it and reflect git <https://bugzilla.redhat.com/show_bug.cgi?id=761930>
01:16 glusterbot News from resolvedglusterbugs: [Bug 762135] glusterfs-volgen script with raid-0 gives error message <https://bugzilla.redhat.com/show_bug.cgi?id=762135>
01:16 glusterbot News from resolvedglusterbugs: [Bug 765287] Remove ld path hardcoding for non 64bit systems. <https://bugzilla.redhat.com/show_bug.cgi?id=765287>
01:16 glusterbot News from resolvedglusterbugs: [Bug 765520] Fix a typo in min value check in io-cache <https://bugzilla.redhat.com/show_bug.cgi?id=765520>
01:16 glusterbot News from resolvedglusterbugs: [Bug 765529] fusermount build fails on Fedora 15 with new glibc 2.14 <https://bugzilla.redhat.com/show_bug.cgi?id=765529>
01:16 glusterbot News from resolvedglusterbugs: [Bug 765531] Add support for heuristics on free inodes in distribute file scheduling <https://bugzilla.redhat.com/show_bug.cgi?id=765531>
01:16 glusterbot News from resolvedglusterbugs: [Bug 765541] Put an effort to end the spell mistakes and typos <https://bugzilla.redhat.com/show_bug.cgi?id=765541>
01:16 glusterbot News from resolvedglusterbugs: [Bug 919916] glusterd compilation failure on OSX due to AT_SYMLINK_NOFOLLOW <https://bugzilla.redhat.com/show_bug.cgi?id=919916>
01:16 glusterbot News from resolvedglusterbugs: [Bug 986429] Backupvolfile server option should work internal to GlusterFS framework <https://bugzilla.redhat.com/show_bug.cgi?id=986429>
01:16 glusterbot News from resolvedglusterbugs: [Bug 990330] geo-replication fails for longer fqdn's <https://bugzilla.redhat.com/show_bug.cgi?id=990330>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1017993] gluster processes call call_bail() at high frequency resulting in high CPU utilization <https://bugzilla.redhat.com/show_bug.cgi?id=1017993>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1031328] Gluster man pages are out of date. <https://bugzilla.redhat.com/show_bug.cgi?id=1031328>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1089172] MacOSX/Darwin port <https://bugzilla.redhat.com/show_bug.cgi?id=1089172>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1091600] ctdb start post hook should set ping-timeout to respectable value <https://bugzilla.redhat.com/show_bug.cgi?id=1091600>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1095525] GlusterFS MacOSX FUSE loop on symlink <https://bugzilla.redhat.com/show_bug.cgi?id=1095525>
01:16 glusterbot News from resolvedglusterbugs: [Bug 987240] NUFA options 'lookup-unhashed' and 'local-volume-name' are not visible through cli <https://bugzilla.redhat.com/show_bug.cgi?id=987240>
01:16 glusterbot News from resolvedglusterbugs: [Bug 762983] ping_pong tests make client go segfault after bailout <https://bugzilla.redhat.com/show_bug.cgi?id=762983>
01:16 glusterbot News from resolvedglusterbugs: [Bug 1068776] Sharing RHS volume subdirectory via Samba causes error messages in log.ctdb <https://bugzilla.redhat.com/show_bug.cgi?id=1068776>
01:16 glusterbot News from resolvedglusterbugs: [Bug 764826] [FEAT] Allow bandwidth limiting or bandwidth shaping with geo-rep configuration <https://bugzilla.redhat.com/show_bug.cgi?id=764826>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763924] add-brick and remove-brick changes order of subvolumes <https://bugzilla.redhat.com/show_bug.cgi?id=763924>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763653] Cli still asks for wrong confirmation even if the volume is stopped <https://bugzilla.redhat.com/show_bug.cgi?id=763653>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763654] Volume not present wrong message displayed on command line <https://bugzilla.redhat.com/show_bug.cgi?id=763654>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763836] [3.1.1qa5]: network.ping-timeout set failed <https://bugzilla.redhat.com/show_bug.cgi?id=763836>
01:16 glusterbot News from resolvedglusterbugs: [Bug 763998] stale file handle errors after volume expansion and half way through rebalance <https://bugzilla.redhat.com/show_bug.cgi?id=763998>
01:17 glusterbot News from resolvedglusterbugs: [Bug 765337] Client complains on non existent server running on port 24008 <https://bugzilla.redhat.com/show_bug.cgi?id=765337>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762165] Posix conformance test failed on 3.0.0pre2 (Dec 3) release <https://bugzilla.redhat.com/show_bug.cgi?id=762165>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762166] Crash with 3.0.0pre2 on client01 with "metarates" parallel MPI metadata benchmark <https://bugzilla.redhat.com/show_bug.cgi?id=762166>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762201] V3.0 and rsync crash with distribute on server side <https://bugzilla.redhat.com/show_bug.cgi?id=762201>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762305] 3.0.1 doesn't fetch volfume files from the server <https://bugzilla.redhat.com/show_bug.cgi?id=762305>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762631] replicate says came back up 2 times <https://bugzilla.redhat.com/show_bug.cgi?id=762631>
01:17 glusterbot News from resolvedglusterbugs: [Bug 763918] [3.1.1-GA] unknown error 526 <https://bugzilla.redhat.com/show_bug.cgi?id=763918>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762532] Problem on OSX with NFS and CIFS exports <https://bugzilla.redhat.com/show_bug.cgi?id=762532>
01:17 glusterbot News from resolvedglusterbugs: [Bug 761769] mount command should mimic existing practice <https://bugzilla.redhat.com/show_bug.cgi?id=761769>
01:17 glusterbot News from resolvedglusterbugs: [Bug 761772] Various compile-time and run-time issues on FreeBSD <https://bugzilla.redhat.com/show_bug.cgi?id=761772>
01:17 glusterbot News from resolvedglusterbugs: [Bug 761773] FreeBSD and Linux crash (by moving write-behind location in config) <https://bugzilla.redhat.com/show_bug.cgi?id=761773>
01:17 glusterbot News from resolvedglusterbugs: [Bug 761776] Server crashes when making *write* <https://bugzilla.redhat.com/show_bug.cgi?id=761776>
01:17 glusterbot News from resolvedglusterbugs: [Bug 761784] patch to avoid infinite loop on ARM <https://bugzilla.redhat.com/show_bug.cgi?id=761784>
01:17 glusterbot News from resolvedglusterbugs: [Bug 761789] install error for unprivileged user <https://bugzilla.redhat.com/show_bug.cgi?id=761789>
01:17 glusterbot News from resolvedglusterbugs: [Bug 761822] Add glusterfs support for libvirt and ovirt <https://bugzilla.redhat.com/show_bug.cgi?id=761822>
01:17 glusterbot News from resolvedglusterbugs: [Bug 761828] booster shared library is not build properly <https://bugzilla.redhat.com/show_bug.cgi?id=761828>
01:17 glusterbot News from resolvedglusterbugs: [Bug 761832] GlusterSP 2.1 <https://bugzilla.redhat.com/show_bug.cgi?id=761832>
01:17 glusterbot News from resolvedglusterbugs: [Bug 761867] fdatasync symbol not found <https://bugzilla.redhat.com/show_bug.cgi?id=761867>
01:17 glusterbot News from resolvedglusterbugs: [Bug 761936] mount.glusterfs mounts to incorrect mount point <https://bugzilla.redhat.com/show_bug.cgi?id=761936>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762020] volgen script failed initial tests due to wrong fd definition <https://bugzilla.redhat.com/show_bug.cgi?id=762020>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762070] rpm post uninstall kills glusterfsd process <https://bugzilla.redhat.com/show_bug.cgi?id=762070>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762095] Logrotate doesn't work <https://bugzilla.redhat.com/show_bug.cgi?id=762095>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762098] Infinite loop with centralized logging. <https://bugzilla.redhat.com/show_bug.cgi?id=762098>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762128] Volgen multiple export directory support <https://bugzilla.redhat.com/show_bug.cgi?id=762128>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762142] Typo in gf_proc_dump for attribute_timeout <https://bugzilla.redhat.com/show_bug.cgi?id=762142>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762143] Rewrite volgen using option parser and extend cifs/nfs support <https://bugzilla.redhat.com/show_bug.cgi?id=762143>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762144] Remove deprecated export_dir in print string <https://bugzilla.redhat.com/show_bug.cgi?id=762144>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762145] rpmbuild fails with unpackaged files <https://bugzilla.redhat.com/show_bug.cgi?id=762145>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762153] Make booster configuration honour conf-dir and transport type <https://bugzilla.redhat.com/show_bug.cgi?id=762153>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762167] glusterfs-volgen does not work when glusterfs is installed at a prefix <https://bugzilla.redhat.com/show_bug.cgi?id=762167>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762169] Fix critical argument validation check on Fedora11 systems <https://bugzilla.redhat.com/show_bug.cgi?id=762169>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762174] Default config directory is /usr/local/etc instead of /etc for CentOS RPM build <https://bugzilla.redhat.com/show_bug.cgi?id=762174>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762175] For glusterfs-volgen "open failed" errors: report exactly what file or directory isn't opening <https://bugzilla.redhat.com/show_bug.cgi?id=762175>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762176] glusterfs-volgen documentation is wrong <https://bugzilla.redhat.com/show_bug.cgi?id=762176>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762220] init.d and mount scripts require root privileges for $ make install --prefix=... <https://bugzilla.redhat.com/show_bug.cgi?id=762220>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762248] cache-size should not be hard-coded to 1GB <https://bugzilla.redhat.com/show_bug.cgi?id=762248>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762295] glusterfs-volgen: can't handle multiple network interfaces <https://bugzilla.redhat.com/show_bug.cgi?id=762295>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762341] Add new "conf-dir" option <https://bugzilla.redhat.com/show_bug.cgi?id=762341>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762343] Add quota support to volgen <https://bugzilla.redhat.com/show_bug.cgi?id=762343>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762377] Determine best ratio wherein distribute is truly deterministic <https://bugzilla.redhat.com/show_bug.cgi?id=762377>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762452] Default . decimal delimiter will not work for other locale <https://bugzilla.redhat.com/show_bug.cgi?id=762452>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762500] cache-size might be wrong in glusterfs.vol <https://bugzilla.redhat.com/show_bug.cgi?id=762500>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762525] Infiniband port handling should be dynamic by picking up from active ports <https://bugzilla.redhat.com/show_bug.cgi?id=762525>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762554] Volgen changes supporting NFS <https://bugzilla.redhat.com/show_bug.cgi?id=762554>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762743] glusterfs-volgen server volfile problem with --nfs option <https://bugzilla.redhat.com/show_bug.cgi?id=762743>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762944] Add solaris building manifest <https://bugzilla.redhat.com/show_bug.cgi?id=762944>
01:17 glusterbot News from resolvedglusterbugs: [Bug 763500] libibverbs-devel should be a BuildRequires in rdma sub-package <https://bugzilla.redhat.com/show_bug.cgi?id=763500>
01:17 glusterbot News from resolvedglusterbugs: [Bug 763502] Cleanup rpmbuild <https://bugzilla.redhat.com/show_bug.cgi?id=763502>
01:17 glusterbot News from resolvedglusterbugs: [Bug 763542] qa 40 glusterd path is hardcoded, won't start by default. <https://bugzilla.redhat.com/show_bug.cgi?id=763542>
01:17 glusterbot News from resolvedglusterbugs: [Bug 1049735] Avoid NULL referencing of auth->authops->request_init during RPC init <https://bugzilla.redhat.com/show_bug.cgi?id=1049735>
01:17 glusterbot News from resolvedglusterbugs: [Bug 1081274] clang compilation fixes and other directory restructuring <https://bugzilla.redhat.com/show_bug.cgi?id=1081274>
01:17 glusterbot News from resolvedglusterbugs: [Bug 1133266] remove unused parameter and correctly handle mem alloc failure <https://bugzilla.redhat.com/show_bug.cgi?id=1133266>
01:17 glusterbot News from resolvedglusterbugs: [Bug 763691] Inode ref NULL and segfault in protocol/client <https://bugzilla.redhat.com/show_bug.cgi?id=763691>
01:17 glusterbot News from resolvedglusterbugs: [Bug 763850] [3.1.1qa5] : gluster volume sync doesn't start already started volumes <https://bugzilla.redhat.com/show_bug.cgi?id=763850>
01:17 glusterbot News from resolvedglusterbugs: [Bug 763706] Add glusterfs-core dependency to glusterfs-fuse and glusterfs-rdma rpms <https://bugzilla.redhat.com/show_bug.cgi?id=763706>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762168] Constant lock bailouts with 3.0.0pre2 <https://bugzilla.redhat.com/show_bug.cgi?id=762168>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762210] Add defrag scripts into glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=762210>
01:17 glusterbot News from resolvedglusterbugs: [Bug 762344] RPM query fails on a replicated rootfs <https://bugzilla.redhat.com/show_bug.cgi?id=762344>
01:17 glusterbot News from resolvedglusterbugs: [Bug 763703] diagnostics.client-log-level doesn't work when set to set to TRACE <https://bugzilla.redhat.com/show_bug.cgi?id=763703>
01:17 glusterbot News from resolvedglusterbugs: [Bug 763815] [3.1.1qa5]: After replace-brick NFS portmap registration failed <https://bugzilla.redhat.com/show_bug.cgi?id=763815>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763835] [3.1.1qa5]: Incomplete self-heal <https://bugzilla.redhat.com/show_bug.cgi?id=763835>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763861] [3.1.1qa9]: gluster command line hangs even with glusterd started <https://bugzilla.redhat.com/show_bug.cgi?id=763861>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763917] Add-brick on gluster cli starts gNFS even when the volume is not started yet <https://bugzilla.redhat.com/show_bug.cgi?id=763917>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763925] glusterd lockup recovery. <https://bugzilla.redhat.com/show_bug.cgi?id=763925>
01:18 glusterbot News from resolvedglusterbugs: [Bug 762096] Segfault in io-cache <https://bugzilla.redhat.com/show_bug.cgi?id=762096>
01:18 glusterbot News from resolvedglusterbugs: [Bug 762173] booster NFS rexporting distribute volume doesn't respond <https://bugzilla.redhat.com/show_bug.cgi?id=762173>
01:18 glusterbot News from resolvedglusterbugs: [Bug 762231] booster NFS rexporting mirror volume doesn't work <https://bugzilla.redhat.com/show_bug.cgi?id=762231>
01:18 glusterbot News from resolvedglusterbugs: [Bug 762555] Fix memleak in nfs mount <https://bugzilla.redhat.com/show_bug.cgi?id=762555>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763655] Unknown error 526 when one of the subvolumes for distribute is down <https://bugzilla.redhat.com/show_bug.cgi?id=763655>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763689] Segfault during lookup in nfsbeta rc15 <https://bugzilla.redhat.com/show_bug.cgi?id=763689>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763690] Protocol client segfault in nfs beta <https://bugzilla.redhat.com/show_bug.cgi?id=763690>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763695] NFS server segfault during access_cbk <https://bugzilla.redhat.com/show_bug.cgi?id=763695>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763705] Unnecessary log message "RPC program not available" <https://bugzilla.redhat.com/show_bug.cgi?id=763705>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763899] [3.1.1-GA] Spurious File Handle errors for NFS server <https://bugzilla.redhat.com/show_bug.cgi?id=763899>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763941] [3.1.1-GA] gNFS goes into an unusable state if dns-lookups fail <https://bugzilla.redhat.com/show_bug.cgi?id=763941>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763834] [3.1.1qa5]: Replicate fails with ctdb with failover <https://bugzilla.redhat.com/show_bug.cgi?id=763834>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763926] gluster peer probe on a ip octet value bigger than 255 has no validation <https://bugzilla.redhat.com/show_bug.cgi?id=763926>
01:18 glusterbot News from resolvedglusterbugs: [Bug 762164] Wrong failure message when client and server mismatch with version string <https://bugzilla.redhat.com/show_bug.cgi?id=762164>
01:18 glusterbot News from resolvedglusterbugs: [Bug 762359] authentication doesn't handle hostnames <https://bugzilla.redhat.com/show_bug.cgi?id=762359>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763650] Default Init fails against qa46, should run when any one of the transport is available <https://bugzilla.redhat.com/show_bug.cgi?id=763650>
01:18 glusterbot News from resolvedglusterbugs: [Bug 763816] [3.1.1qa5] : replace-brick fails to migrate data when migration from same hostname <https://bugzilla.redhat.com/show_bug.cgi?id=763816>
01:18 glusterbot News from resolvedglusterbugs: [Bug 764004] symbol errors with new gcc <https://bugzilla.redhat.com/show_bug.cgi?id=764004>
01:18 glusterbot News from resolvedglusterbugs: [Bug 769691] Placeholder for code cleanup in entire glusterfs code base <https://bugzilla.redhat.com/show_bug.cgi?id=769691>
01:18 glusterbot News from resolvedglusterbugs: [Bug 762172] setattr calls keep failing over replicate with latest git 3.0 <https://bugzilla.redhat.com/show_bug.cgi?id=762172>
01:18 glusterbot News from resolvedglusterbugs: [Bug 762253] SPECFS validation fails over distribute + replicate <https://bugzilla.redhat.com/show_bug.cgi?id=762253>
01:29 hflai_ joined #gluster
01:29 bala joined #gluster
01:43 bharata-rao joined #gluster
01:49 harish joined #gluster
01:51 ttaaccoo left #gluster
02:13 bala joined #gluster
02:14 T3 joined #gluster
02:18 nangthang joined #gluster
02:46 haomaiwa_ joined #gluster
02:47 ttaaccoo joined #gluster
02:49 ttaaccoo left #gluster
02:49 kiwnix joined #gluster
03:07 theron joined #gluster
03:09 o5k_ joined #gluster
03:17 Bhaskarakiran joined #gluster
03:19 kdhananjay joined #gluster
03:28 meghanam joined #gluster
03:39 Bhaskarakiran joined #gluster
03:40 spandit joined #gluster
03:51 atinmu joined #gluster
04:00 kanagaraj joined #gluster
04:06 kumar joined #gluster
04:12 maveric_amitc_ joined #gluster
04:18 dusmant joined #gluster
04:24 RameshN joined #gluster
04:24 kdhananjay1 joined #gluster
04:29 kdhananjay joined #gluster
04:30 schandra joined #gluster
04:46 rafi joined #gluster
04:51 ndarshan joined #gluster
04:53 vimal joined #gluster
04:56 kshlm joined #gluster
04:57 bharata-rao joined #gluster
05:00 T3 joined #gluster
05:02 meghanam joined #gluster
05:03 punit_ raghu, did you get any luck ??
05:04 bala joined #gluster
05:19 gem joined #gluster
05:22 jiffin joined #gluster
05:23 anoopcs joined #gluster
05:27 Guest60280 joined #gluster
05:27 lalatenduM joined #gluster
05:28 jiku joined #gluster
05:30 ppai joined #gluster
05:37 ppp joined #gluster
05:39 Manikandan joined #gluster
05:42 ashiq joined #gluster
05:47 dusmant joined #gluster
05:49 ramteid joined #gluster
05:51 nishanth joined #gluster
05:52 karnan joined #gluster
06:00 soumya_ joined #gluster
06:01 R0ok_ joined #gluster
06:04 atalur joined #gluster
06:12 raghu joined #gluster
06:17 glusterbot News from newglusterbugs: [Bug 1163543] Fix regression test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1163543>
06:19 ndarshan joined #gluster
06:24 dusmant joined #gluster
06:29 Philambdo joined #gluster
06:31 overclk joined #gluster
06:37 ppp joined #gluster
06:40 deepakcs joined #gluster
06:49 anrao joined #gluster
06:51 hchiramm joined #gluster
06:57 ppai joined #gluster
06:58 pcaruana joined #gluster
06:59 nshaikh joined #gluster
07:07 nbalacha joined #gluster
07:07 nangthang joined #gluster
07:07 ndarshan joined #gluster
07:10 anil_ joined #gluster
07:13 anrao joined #gluster
07:17 glusterbot News from newglusterbugs: [Bug 1204604] [Data-tiering] :  Tiering error during configure even if tiering is disabled. <https://bugzilla.redhat.com/show_bug.cgi?id=1204604>
07:18 kiwnix joined #gluster
07:20 dusmant joined #gluster
07:21 jtux joined #gluster
07:33 o5k__ joined #gluster
07:35 mbukatov joined #gluster
07:36 o5k_ joined #gluster
07:37 bala joined #gluster
07:38 topshare joined #gluster
07:39 gem joined #gluster
07:41 anrao joined #gluster
07:47 ppai joined #gluster
07:52 hgowtham joined #gluster
07:56 anrao joined #gluster
07:56 topshare_ joined #gluster
07:57 [Enrico] joined #gluster
07:57 topshare_ joined #gluster
08:00 T3 joined #gluster
08:07 Bhaskarakiran joined #gluster
08:08 gem joined #gluster
08:12 hchiramm joined #gluster
08:20 dusmant joined #gluster
08:22 Slashman joined #gluster
08:29 DV joined #gluster
08:29 fsimonce joined #gluster
08:31 o5k_ joined #gluster
08:33 pkoro joined #gluster
08:43 jflf joined #gluster
08:46 deniszh joined #gluster
08:48 gem joined #gluster
08:50 smohan joined #gluster
08:51 hybrid512 joined #gluster
08:53 atalur_ joined #gluster
08:53 [Enrico] joined #gluster
08:55 jflf joined #gluster
08:55 liquidat joined #gluster
08:58 ctria joined #gluster
09:00 kovshenin joined #gluster
09:00 gildub joined #gluster
09:04 hchiramm joined #gluster
09:05 anoopcs joined #gluster
09:07 ppai joined #gluster
09:07 meghanam joined #gluster
09:12 ndarshan joined #gluster
09:16 dusmant joined #gluster
09:18 glusterbot News from newglusterbugs: [Bug 1204636] [SNAPSHOT]: After a volume which has quota enabled is restored to a snap, attaching another node to the cluster is not successful <https://bugzilla.redhat.com/show_bug.cgi?id=1204636>
09:19 bala joined #gluster
09:20 jiffin joined #gluster
09:22 Norky joined #gluster
09:25 Pupeno joined #gluster
09:35 kbyrne joined #gluster
09:36 meghanam joined #gluster
09:44 mkzero joined #gluster
09:47 CP|AFK joined #gluster
09:47 dusmant joined #gluster
09:51 dusmant joined #gluster
09:54 rgustafs joined #gluster
09:56 meghanam joined #gluster
09:56 kanagaraj joined #gluster
09:58 hchiramm joined #gluster
10:03 T0aD joined #gluster
10:10 rjoseph joined #gluster
10:13 dusmant joined #gluster
10:14 bala joined #gluster
10:27 ppai joined #gluster
10:27 hgowtham joined #gluster
10:30 sripathi joined #gluster
10:30 hagarth joined #gluster
10:31 hagarth1 joined #gluster
10:40 Dw_Sn joined #gluster
10:45 o5k_ joined #gluster
10:46 harish joined #gluster
10:48 magbal joined #gluster
10:50 hagarth joined #gluster
10:54 firemanxbr joined #gluster
11:00 deniszh1 joined #gluster
11:09 ira joined #gluster
11:15 kkeithley joined #gluster
11:16 ppai joined #gluster
11:36 hchiramm joined #gluster
11:46 soumya joined #gluster
11:55 anoopcs joined #gluster
12:03 ppai joined #gluster
12:08 sripathi left #gluster
12:09 rjoseph joined #gluster
12:13 nishanth joined #gluster
12:27 chirino joined #gluster
12:29 topshare joined #gluster
12:30 siel joined #gluster
12:30 o5k joined #gluster
12:31 theron joined #gluster
12:31 RicardoSSP joined #gluster
12:31 theron joined #gluster
12:38 LebedevRI joined #gluster
12:41 o5k__ joined #gluster
12:48 Gill joined #gluster
12:48 glusterbot News from newglusterbugs: [Bug 1204727] Maintainin local transaction peer list in op-sm framework <https://bugzilla.redhat.com/show_bug.cgi?id=1204727>
12:48 glusterbot News from newglusterbugs: [Bug 1204735] sys/sysctl.h should not get #included when sysctl() is not called <https://bugzilla.redhat.com/show_bug.cgi?id=1204735>
13:01 B21956 joined #gluster
13:03 vimal joined #gluster
13:10 _Bryan_ joined #gluster
13:17 topshare joined #gluster
13:18 glusterbot News from resolvedglusterbugs: [Bug 1018178] Glusterfs ports conflict with qemu live migration <https://bugzilla.redhat.com/show_bug.cgi?id=1018178>
13:19 bennyturns joined #gluster
13:22 topshare joined #gluster
13:30 dgandhi joined #gluster
13:31 dgandhi joined #gluster
13:32 dgandhi joined #gluster
13:32 georgeh-LT2 joined #gluster
13:33 dgandhi joined #gluster
13:34 dgandhi joined #gluster
13:35 its_pete joined #gluster
13:35 dgandhi joined #gluster
13:36 dgandhi joined #gluster
13:38 dgandhi joined #gluster
13:39 dgandhi joined #gluster
13:39 anrao joined #gluster
13:40 dusmant joined #gluster
13:41 dgandhi joined #gluster
13:41 dgandhi joined #gluster
13:42 _Bryan_ joined #gluster
13:43 dgandhi joined #gluster
13:44 dgandhi joined #gluster
13:45 dgandhi joined #gluster
13:45 its_pete2 joined #gluster
13:57 topshare joined #gluster
14:00 plarsen joined #gluster
14:05 topshare joined #gluster
14:06 topshare joined #gluster
14:06 T3 joined #gluster
14:07 julim joined #gluster
14:11 papamoose joined #gluster
14:11 maveric_amitc_ joined #gluster
14:13 bene2 joined #gluster
14:13 saltlake joined #gluster
14:14 anoopcs joined #gluster
14:16 theron joined #gluster
14:16 wushudoin joined #gluster
14:19 ildefonso joined #gluster
14:21 nbalacha joined #gluster
14:26 lpabon joined #gluster
14:36 jobewan joined #gluster
14:52 jbrooks joined #gluster
15:02 jmarley joined #gluster
15:03 ctria joined #gluster
15:04 lalatenduM joined #gluster
15:11 bala1 joined #gluster
15:11 liquidat joined #gluster
15:16 DV_ joined #gluster
15:22 its_pete joined #gluster
15:25 roost joined #gluster
15:27 o5k__ joined #gluster
15:27 its_pete Question about geo-replication: I have a large amount of data (~2TB of data) to replicate across 3 data centres - is it possible to place the data directly on each node in a way that will make Gluster geo-replication aware of it so that changes/deletes will be replicated?  I'm trying to avoid replicating all the data between the DCs initially in order to keep my data transfer costs down.
15:29 o5k_ joined #gluster
15:31 DV_ joined #gluster
15:34 bennyturns joined #gluster
15:40 virusuy joined #gluster
15:40 virusuy joined #gluster
15:43 magbal joined #gluster
15:46 rjoseph joined #gluster
15:52 DV joined #gluster
16:19 soumya joined #gluster
16:28 T3 joined #gluster
16:31 PeterA joined #gluster
16:49 glusterbot News from newglusterbugs: [Bug 1197185] Brick/glusterfsd crash randomly once a day on a replicated volume <https://bugzilla.redhat.com/show_bug.cgi?id=1197185>
16:53 T3 joined #gluster
17:01 theron joined #gluster
17:06 Rapture joined #gluster
17:21 Alpinist joined #gluster
17:21 nbalacha joined #gluster
17:21 prasanth|mtg joined #gluster
17:48 o5k__ joined #gluster
17:53 ashiq joined #gluster
17:59 o5k__ joined #gluster
18:01 theron joined #gluster
18:01 shaunm joined #gluster
18:02 o5k_ joined #gluster
18:06 o5k__ joined #gluster
18:09 uebera|| joined #gluster
18:09 overclk joined #gluster
18:17 plarsen joined #gluster
18:27 wkf joined #gluster
18:34 kovshenin joined #gluster
18:53 its_pete joined #gluster
18:57 its_pete Is there any way to use geo-replication to keep existing data synced? (ie. I have the same data on a primary server and 2 DR servers and I want to use geo-replication to replicate any changes to that existing data from Primary -> DR1 -> DR2.  Is this possible?)
19:04 coredump joined #gluster
19:04 pjschmitt joined #gluster
19:04 raatti joined #gluster
19:04 nhayashi joined #gluster
19:04 prg3 joined #gluster
19:06 adamaN joined #gluster
19:08 itspete joined #gluster
19:08 adamaN Hi i have a question about cluster.min-free-disk: i have glusterfs 3.3.1 and the minimum free disk is not respected although it was set to 1TB. Any way of setting cluster.min-free-disk separately for each brick?
19:10 anrao joined #gluster
19:10 coredump semiosis: your ppas for gluster 3.4, they moved here https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.4 right?
19:11 JoeJulian its_pete: build a pseudo-cluster locally and have your hostnames resolve to it. Once the geo-rep is done, ship the disks?
19:12 JoeJulian adamaN: min-free-disk is only for preventing the creation of new files on a brick. There's no way to prevent the growth of the file. Also 3.3.1 is absurdly out of date and has a bunch of since-fixed critical bugs.
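[Example, not part of the log: a rough sketch of how the option discussed above is set. cluster.min-free-disk is a per-volume setting, so it cannot be set separately per brick; the volume name "myvol" is a placeholder and the values mirror those mentioned in the conversation.]
    # volume-wide threshold below which new files stop being placed on a brick
    gluster volume set myvol cluster.min-free-disk 10%
    # an absolute size is also accepted
    gluster volume set myvol cluster.min-free-disk 1TB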
19:12 JoeJulian coredump: right
19:16 semiosis coredump: yep
19:19 adamaN JoeJulian: Thanks. Some of my bricks are filling up to 100% and i did not understand why, when there are new ones that are not used (just 15%)
19:20 glusterbot News from newglusterbugs: [Bug 1174016] network.compression fails simple '--ioengine=sync' fio test <https://bugzilla.redhat.com/show_bug.cgi?id=1174016>
19:22 shaunm joined #gluster
19:24 its_pete JoeJulian: Thanks. I don't know if that would work in my situation because the fileservers that the data needs to end up on each have multiple SSDs RAIDed together, or is it possible to forklift the volume from one physical disk to another?
19:25 rotbeard joined #gluster
19:26 JoeJulian its_pete: I don't see why you couldn't. It's just copying files along with their extended attributes. You just want to make sure the volume layout is going to match.
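[Example, not part of the log: a minimal sketch of the "copy files along with their extended attributes" step, assuming rsync built with xattr/ACL support and run as root; the brick paths and file name are placeholders.]
    # copy a brick onto new storage, preserving owners, ACLs and the trusted.* xattrs gluster relies on
    rsync -aAXH /bricks/old/ /bricks/new/
    # spot-check that the gluster xattrs came across
    getfattr -m . -d -e hex /bricks/new/some-file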
19:27 DV joined #gluster
19:27 its_pete JoeJulian: Thanks
19:45 Manikandan joined #gluster
19:45 Manikandan_ joined #gluster
19:45 roost joined #gluster
20:08 siel joined #gluster
20:29 coredump Soo, I have a weird problem with gluster. I have 10 openstack servers that use a 5 node gluster cluster (heh) to share images
20:30 coredump one of the dirs, the glance one, keeps reverting the perms to root:root
20:30 coredump I change, 5 minutes later it's back to root:root
20:30 coredump I even set uid/gid on the volume config
20:30 coredump and it only happens to that volume
20:31 JoeJulian I didn't think it even *could* override the volume config.
20:31 coredump I tried everything, even tools to watch the dir and tell me WHO or WHAT is changing it
20:31 coredump but no luck
20:31 JoeJulian client/brick logs?
20:32 JoeJulian I don't expect anything, just shooting in the dark.
20:32 coredump storage.owner-gid: 119 and storage.owner-uid: 111
20:33 coredump all the users are the same across all the servers (even if the glance dir is only accessed from the controller)
20:33 JoeJulian What version, and what's your volume layout?
20:34 coredump I tried to put an acl giving the full permissions to the glance user, and changed the mask and everything. Seconds later everything reverted. The user acl is still there but the default mask is reset to r-x
20:34 adamaN left #gluster
20:34 coredump 3.4.2 version
20:35 coredump https://gist.github.com/coredump/6d57b709e7763ca6937b
20:35 coredump this is the info on the volume
20:36 coredump (no comments about 5 servers, I know)
20:41 coredump @JoeJulian ^
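[Example, not part of the log: the two options coredump quotes above are ordinary volume settings; "glance" as the volume name and the uid/gid values are taken from the conversation.]
    # pin the ownership of the volume root to the glance service account
    gluster volume set glance storage.owner-uid 111
    gluster volume set glance storage.owner-gid 119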
20:43 JoeJulian nothing obvious
20:45 JoeJulian There are several memory leaks and one bad crash on rebalance that are fixed in the 3.4 branch post 3.4.2 but nothing about changing ownership.
20:45 JoeJulian is it the mount that's changing, or the file ownership?
20:47 coredump mount point
20:48 coredump like /data/glance
20:48 coredump the files in the dir don't change
20:48 coredump buuut since it changes to 755 glance can't create a file there when it's needed
20:50 glusterbot News from newglusterbugs: [Bug 1188184] Tracker bug :  NFS-Ganesha new features support for  3.7. <https://bugzilla.redhat.com/show_bug.cgi?id=1188184>
20:51 JoeJulian something in fstab maybe?
20:53 JoeJulian Hehe, looking at your isitcreepy. I married at the lower edge of creepy according to your app.
20:55 coredump that's the gold standard
20:57 coredump fstab: 10.x1:/glance /data/glance glusterfs defaults,acl,rw 0 2
20:58 _zerick_ joined #gluster
20:59 badone joined #gluster
20:59 JoeJulian How often do you have your puppet/salt/ansible run?
20:59 ndevos coredump: I think there was a bug about the storage.owner-gid settings, where a brick stop/start would reset it, or something
21:00 JoeJulian Oh, yeah. there's another possibility I didn't check. I just looked for patches, not open bugs.
21:01 coredump yeah but I think I never stopped those bricks ever.
21:01 ndevos added a brick?
21:01 coredump JoeJulian: chef runs every 30m, but the commands there are to change the dir to the right perms
21:02 coredump ndevos nope. since installed I only added a volume
21:02 coredump (does that count as adding a brick? no right)
21:02 JoeJulian coredump: what about on your servers? Do you change the brick owner?
21:02 JoeJulian And since it's every time he changes it, not just on rare occasions, something got to be changing it back.
21:03 coredump hmm
21:03 coredump explain brick owner to me
21:03 JoeJulian (and there was a reason I didn't include chef... yuck.) ;)
21:04 ndevos coredump: no, indeed, adding other volumes would not affect it
21:04 JoeJulian On your server: /data/glance should have changed to 119:111 when you changed the owner from the client.
21:06 ndevos I can't find that bug, but it sounds like something else anyway...
21:06 JoeJulian bug 1040275
21:06 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1040275 high, high, ---, vbellur, CLOSED CURRENTRELEASE, Stopping/Starting a Gluster volume resets ownership
21:06 ndevos yes, that one!
21:06 JoeJulian fixed in 3.4.2
21:08 ndevos you mean 3.4.3?
21:09 coredump ok when I change the ownership on the gluster client, the server brick changes uid/gid
21:09 ndevos hmm, v3.4.2qa4 should have it, so yes, fixed in 3.4.2
21:10 ndevos coredump: did it change on all the bricks?
21:10 coredump ohhh
21:10 coredump OHHH
21:10 coredump ok
21:11 coredump chef is changing the /data permussions on run
21:11 coredump since I have 4 chef clients running on different times, that's probably what's causing it
21:11 ndevos haha!
21:11 coredump I meant on the gluster servers
21:11 coredump I was under the impression that the gluster server permissions didn't matter
21:12 * JoeJulian does a victory dance.
21:12 ndevos they do, gluster just passed that info on to the clients, there is no magic :)
21:12 coredump yeah
21:12 coredump now I see that
21:13 coredump I thought the client permissions were some kind of metadata
21:13 coredump guys
21:13 coredump this is the best
21:13 coredump this has been HAUNTING ME
21:13 * ndevos does a wave o o| |o/ /o/
21:13 JoeJulian I'm on a roll today. Every problem is going away quickly and easily. I've got a dozen pull requests in so far today, each of them closes a ticket.
21:13 JoeJulian ... and it's only a quarter past two.
21:13 ndevos and yes, it *is* metadata, but its the same metadata as the local filesystem uses
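[Example, not part of the log: a quick way to see what ndevos describes — ownership changed through any client shows up directly on every brick, because it is the same filesystem metadata; the mount point and brick path are placeholders.]
    # on a client
    chown 111:119 /data/glance
    # on each server, the brick root now reports the same owner
    stat -c '%u:%g %n' /bricks/glance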
21:14 coredump what god forsaken timezone are you
21:14 coredump https://gist.github.com/coredump/70d3a3e27d5f0a379713
21:14 coredump GODAMMIT CHEF
21:15 JoeJulian PDT (or MST depending if you mean my home office or where I work)
21:20 plarsen joined #gluster
21:31 elitecoder JoeJulian: I have a theoretical question. If XFS were to lose a file with two gluster nodes in a replication config, would self-healing result in deletion or duplication?
21:35 JoeJulian It would re-replicate, but only if a heal-full was run, or the file was looked up.
21:36 wkf joined #gluster
21:37 elitecoder Ever seen it happen by chance?
21:38 JoeJulian I've seen a similar repair happen, having replaced bricks with new empty drives.
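[Example, not part of the log: the two triggers JoeJulian mentions for re-replicating a file one brick has lost; "myvol" and the path are placeholders.]
    # walk the whole volume and queue anything that is missing a copy
    gluster volume heal myvol full
    # or simply look the file up through a client mount, which runs the same check
    stat /mnt/myvol/path/to/the/file
    # watch what is still pending
    gluster volume heal myvol info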
21:39 coredump so, since we are being awesome today, two more questions
21:39 coredump no active sinks for performing self-heal < dafuq
21:39 JoeJulian :D
21:40 coredump 2) can I update from 3.4 to 3.5 without losing data? (I don't think 3.5 is available on precise)
21:41 JoeJulian You can safely upgrade, yes.
21:41 JoeJulian @ppa
21:41 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
21:41 JoeJulian I see precise there.
21:42 coredump oh, there's 3.6 too
21:42 coredump i am scared to death of updating gluster
21:42 coredump so many instances
21:43 JoeJulian I don't recommend 3.6 at this time.
21:45 coredump will I see huge gains on anything updating to 3.5, can I roll back to 3.4, and do I update the clients first then the servers one by one?
21:52 Rapture joined #gluster
21:57 JoeJulian If not 3.5.3, then at least 3.4.6.
21:57 JoeJulian I have had reports of speed improvements with 3.5.
22:01 JoeJulian Here's all the release notes: https://github.com/gluster/glusterfs/tree/release-3.5/doc/release-notes
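[Example, not part of the log: a rough sketch of the PPA upgrade route being discussed, assuming the ppa:gluster/glusterfs-3.5 archive name and the stock package names; upgrade one node at a time and let heals settle before moving on.]
    add-apt-repository ppa:gluster/glusterfs-3.5
    apt-get update
    # stop the gluster services on the node being upgraded, then
    apt-get install glusterfs-server glusterfs-client
    # restart glusterd and check peer and heal status before touching the next node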
22:17 o5k_ joined #gluster
22:19 badone_ joined #gluster
22:23 bennyturns joined #gluster
22:23 coredump JoeJulian I read it. I think the thing that catches my attention more is LOGS
22:23 coredump because, man, logs are a mess
22:25 coredump ok I am out of here for today
22:25 JoeJulian later
22:25 coredump Thanks again JoeJulian ndevos for the help
22:25 JoeJulian You're welcome.
22:26 coredump let me know if you ever get around DC and I will buy you guys a beer
22:26 JoeJulian Donate to open-source cancer research instead. Link on my blog.
22:26 coredump linky
22:26 JoeJulian http://joejulian.name
22:26 coredump out http://i.imgur.com/5bOrdrb.gif
22:28 coredump > , instead of offering to owe me a beer, please consider donating to free and open cancer research.
22:28 coredump that's very specific :P
22:30 JoeJulian The button goes to a lab that has the open-source philosophy that I've researched.
22:31 JoeJulian I guess I should say, "by clicking on this button"
22:32 quique joined #gluster
22:32 quique i took a brick out of a replica volume
22:33 JoeJulian why do I feel like this is going to be followed with, "... and threw it out the window."
22:33 quique but i figured out that the self-healing
22:33 quique didn't finish to the others
22:33 quique when i try go to put it back in
22:33 quique i get
22:33 quique is already part of a volume
22:34 JoeJulian @path or prefix
22:34 glusterbot JoeJulian: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
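[Example, not part of the log: the gist of the fix behind the linked post, for the case where a brick directory that still holds its data has to be accepted again; the path comes from the conversation, and the post above has the full procedure.]
    # on the server whose brick is being re-added
    setfattr -x trusted.glusterfs.volume-id /mediaserver
    setfattr -x trusted.gfid /mediaserver
    # then retry the add-brick; glusterd re-stamps the brick once it is accepted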
22:34 quique that will allow the others to heal from it?
22:35 JoeJulian @brick-order
22:35 glusterbot JoeJulian: I do not know about 'brick-order', but I do know about these similar topics: 'brick order'
22:35 JoeJulian @brick order
22:35 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
22:35 JoeJulian so if you remove a brick, the data's not going to replicate to other bricks.
22:36 JoeJulian It'll sit there waiting around until you replace the brick and replicate back to it.
22:36 quique ok
22:40 bene2 joined #gluster
22:41 T3 joined #gluster
22:44 badone_ joined #gluster
22:53 quique hmmm JoeJulian, i had stopped the volume; now when I try to start it I get: volume start: mediaserver: failed: Staging failed on 10.231.195.153. Error: Volume mediaserver already started
22:53 quique but it's not started
22:53 quique staging failed means what?
22:57 JoeJulian I'm looking through the source and it's still not very clear. Check your glusterd logs on 195.153.
22:58 quique ok so restarting glusterd on the node
22:58 quique where i was trying to start it from
22:58 quique seemed to fix that
22:58 pelox joined #gluster
22:58 JoeJulian Oh, good.
22:59 quique gonna try to add the old brick back to the volume now
23:01 quique hmm, @JoeJulian so how do i determine brick order when i'm doing an add-brick?
23:01 quique right now it's gluster2:/mediaserver gluster3:/mediaserver
23:02 quique if i want gluster1:/mediaserver to be first
23:02 quique how would i do that?
23:02 JoeJulian Assuming you removed two bricks before, just adding them both will be fine.
23:02 quique i had three total
23:02 quique replica 3
23:02 JoeJulian Oh, ok.
23:02 JoeJulian Then I'm not sure I understand the problem.
23:03 JoeJulian Tell me the story.
23:04 quique so i had gluster1.domain.com:/mediaserver, gluster2.domain.com:/mediaserver, gluster3.domain.com:mediaserver on a replica 3
23:04 quique i started changing over to use internal dns names
23:05 quique gluster3-int.domain.com:/mediaserver
23:05 quique so i took out gluster3.domain.com
23:05 quique and added gluster3-int.domain
23:05 quique and then out with gluster2.domain
23:05 quique and in the gluster2-int
23:06 quique i didn't realize how much there was
23:06 quique and i took gluster1.domain out
23:06 quique then found there were files missing
23:06 quique so i don't think the self heal had finished repopulating gluster2-int and gluster3-int
23:07 quique so that's why i wanted to put it back in
23:07 JoeJulian Oh! ok. Yeah, for that order won't matter.
23:09 JoeJulian after it's back in, run a heal...full to walk the entire tree to make sure they get replicated.
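[Example, not part of the log: what that sequence might look like for the volume in question, assuming it dropped to replica 2 when the brick was removed; the hostname follows the -int naming used in the conversation but may differ.]
    # re-add the old brick and go back to three copies
    gluster volume add-brick mediaserver replica 3 gluster1-int.domain.com:/mediaserver
    # walk the whole tree so the missing files replicate back out
    gluster volume heal mediaserver full
    gluster volume heal mediaserver info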
23:21 quique @JoeJulian in the volume status should there be a port for the self-heal daemon?
23:21 quique i have N/A
23:27 badone joined #gluster
23:29 JoeJulian no
23:30 quique @JoeJulian is there a way to watch the self-heal process?
23:30 quique is gluster volume heal VOLNAME info
23:30 quique what I want?
23:30 JoeJulian That's all there is, unfortunately.
23:37 quique JoeJulian: when I ls a file that is on gluster1 but not on gluster2 or gluster3 from a client
23:37 quique it hangs
23:37 quique why is that?
23:37 quique shouldn't it be able to find the file on gluster1 now?
23:37 JoeJulian If you did that a lot, your background self-heal queue may be full so now you're blocking waiting for the self-heal to finish.
23:38 JoeJulian The client can heal as well as the self-heal daemon.
23:38 quique so blocking like i need to kill something?
23:39 maveric_amitc_ joined #gluster
23:40 quique @JoeJulian also, does the self-heal need to finish before we can get to those types of files?
23:40 ghenry joined #gluster
23:41 JoeJulian blocking like the fd is blocked waiting on io. Killing something won't really help because now that fd is healing in the foreground and won't close until it's finished.
23:42 Rapture joined #gluster
23:42 JoeJulian You have 16 background self-heal slots. Once those are all busy, files will block until healed.
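[Example, not part of the log: the 16-slot limit JoeJulian mentions corresponds to a tunable; raising it is only a sketch of an option, not something done in the log.]
    # allow more files to heal in the background before lookups start blocking
    gluster volume set mediaserver cluster.background-self-heal-count 32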
23:42 JoeJulian Another thing you'll need to look for once the healing is finished is split-brain files.
23:43 JoeJulian Since you took disks down, brought them up, then took others down before the heal was finished, there's a potential for split-brain.
23:43 quique how do i look for those?
23:44 plarsen joined #gluster
23:44 JoeJulian gluster volume heal $vol info split-brain
23:44 JoeJulian That will show a log of entries with timestamps in which split-brain files were encountered.
23:47 quique @JoeJulian: would apache serving those files
23:47 quique have the same blocking effect
23:47 quique as an ls?
23:48 quique filling up the 16 background self-heal slots
23:48 JoeJulian Usually.
23:48 quique so apache should be off
23:48 quique while it self-heals?
23:49 JoeJulian ls is usually configured to do a stat of each file, which performs a lookup which triggers the self-heal check.
23:49 JoeJulian Yeah, probably.
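[Example, not part of the log: two ways to list a directory without the per-file stat JoeJulian describes, assuming the usual ls --color=auto alias is what adds the stat calls; the mount path is a placeholder.]
    # bypass the ls alias so entries are listed without the extra stat (and heal check) per file
    \ls /mnt/mediaserver/somedir
    # shell globbing also reads the directory without stat()ing each entry
    echo /mnt/mediaserver/somedir/*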
