IRC log for #gluster, 2015-07-13


All times shown according to UTC.

Time Nick Message
00:12 side_control joined #gluster
00:19 shyam joined #gluster
00:23 aaronott joined #gluster
00:53 topshare joined #gluster
00:56 night joined #gluster
01:04 davidself joined #gluster
01:09 night I'm testing a gluster setup with two nodes
01:10 night node one runs on top of an lvm consisting of drives with ex4 or exfat, mounted in /usr/share
01:10 night on node two, there is no mount involved, just pointing to a directory in /usr/share as part of the root partition
01:11 night everything sets up great as a replica setup
01:11 night a few cryptic error messages, but nothing setting off alarm bells
01:11 night but the files fail to copy on a manual full heal command
01:11 night is there any particularly bad juju here?
01:12 night *that would be ext4, rather
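    [A minimal sketch of the usual first checks for a failed full heal, assuming the replica volume is named "gv0" (placeholder name):
        gluster volume status gv0                    # are both bricks and the self-heal daemon online?
        gluster volume heal gv0 info                 # which entries are pending or failed?
        tail -f /var/log/glusterfs/glustershd.log    # self-heal daemon log on each node
    Note that GlusterFS bricks rely on extended attributes, so a brick on a filesystem without xattr support (such as exFAT) is unlikely to work or heal correctly.]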
01:30 B21956 joined #gluster
01:31 harish joined #gluster
01:47 calavera joined #gluster
01:53 dgbaley joined #gluster
01:55 nangthang joined #gluster
01:59 dgbaley joined #gluster
02:12 overclk joined #gluster
02:15 gildub joined #gluster
02:28 jdossey joined #gluster
02:46 bharata-rao joined #gluster
02:58 maveric_amitc_ joined #gluster
03:21 topshare joined #gluster
03:24 TheSeven joined #gluster
03:28 mahendra joined #gluster
03:30 coredump joined #gluster
03:33 calavera joined #gluster
03:38 shubhendu joined #gluster
03:40 glusterbot News from newglusterbugs: [Bug 1236128] Quota list is not working on tiered volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1236128>
03:44 vmallika joined #gluster
03:48 atinm joined #gluster
03:52 plarsen joined #gluster
04:06 hagarth joined #gluster
04:06 nbalacha joined #gluster
04:07 itisravi joined #gluster
04:08 rafi joined #gluster
04:10 maveric_amitc_ joined #gluster
04:13 sakshi joined #gluster
04:23 topshare joined #gluster
04:26 kanagaraj joined #gluster
04:28 RameshN joined #gluster
04:39 yazhini joined #gluster
04:41 glusterbot News from newglusterbugs: [Bug 960752] Update to 3.4-beta1 kills glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=960752>
04:41 glusterbot News from newglusterbugs: [Bug 975476] "--mode=script" option not shown in help output <https://bugzilla.redhat.com/show_bug.cgi?id=975476>
04:41 glusterbot News from newglusterbugs: [Bug 976129] Upstream Gluster could do with a decent Feature Matrix on the website <https://bugzilla.redhat.com/show_bug.cgi?id=976129>
04:41 glusterbot News from newglusterbugs: [Bug 976948] Complete lack of informative information when volume creation fails due to using wrong IP of a known gluster host <https://bugzilla.redhat.com/show_bug.cgi?id=976948>
04:41 glusterbot News from newglusterbugs: [Bug 978148] Attempting to mount distributed-replicate volume on RHEL 6.4 hangs in upstream 3.4.0 Beta 3 <https://bugzilla.redhat.com/show_bug.cgi?id=978148>
04:41 glusterbot News from newglusterbugs: [Bug 980541] GlusterFS testing framework needs rdma support added <https://bugzilla.redhat.com/show_bug.cgi?id=980541>
04:41 glusterbot News from newglusterbugs: [Bug 981456] RFE: Please create an "initial offline bulk load" tool for data, for GlusterFS <https://bugzilla.redhat.com/show_bug.cgi?id=981456>
04:41 glusterbot News from newglusterbugs: [Bug 1020012] debug-trace.py in GlusterFS git no longer working <https://bugzilla.redhat.com/show_bug.cgi?id=1020012>
04:41 glusterbot News from newglusterbugs: [Bug 1038866] [FEAT] command to rename peer hostname <https://bugzilla.redhat.com/show_bug.cgi?id=1038866>
04:41 glusterbot News from newglusterbugs: [Bug 1049481] Need better GlusterFS log message string when updating host definition <https://bugzilla.redhat.com/show_bug.cgi?id=1049481>
04:41 glusterbot News from newglusterbugs: [Bug 1075417] Spelling mistakes and typos in the glusterfs source <https://bugzilla.redhat.com/show_bug.cgi?id=1075417>
04:41 glusterbot News from newglusterbugs: [Bug 1084175] tests/bugs/bug-861542.t needs to be more robust. It's failing on long hostnames. <https://bugzilla.redhat.com/show_bug.cgi?id=1084175>
04:41 glusterbot News from newglusterbugs: [Bug 1116782] Please add runtime option to show compile time configuration <https://bugzilla.redhat.com/show_bug.cgi?id=1116782>
04:41 glusterbot News from newglusterbugs: [Bug 1128771] 32/64 bit GlusterFS portability <https://bugzilla.redhat.com/show_bug.cgi?id=1128771>
04:41 glusterbot News from newglusterbugs: [Bug 1146985] Patches with "Submitted, Merge Pending " status in GlusterFs gerrit server <https://bugzilla.redhat.com/show_bug.cgi?id=1146985>
04:41 glusterbot News from newglusterbugs: [Bug 1158067] Gluster volume monitor hangs glusterfsd process <https://bugzilla.redhat.com/show_bug.cgi?id=1158067>
04:41 glusterbot News from newglusterbugs: [Bug 1126831] Memory leak in GlusterFs client <https://bugzilla.redhat.com/show_bug.cgi?id=1126831>
04:41 glusterbot News from newglusterbugs: [Bug 922801] Gluster not resolving hosts with IPv6 only lookups <https://bugzilla.redhat.com/show_bug.cgi?id=922801>
04:41 glusterbot News from newglusterbugs: [Bug 1084432] Service fails to restart after 3.4.3 update <https://bugzilla.redhat.com/show_bug.cgi?id=1084432>
04:41 glusterbot News from newglusterbugs: [Bug 1108448] selinux alerts starting glusterd in f20 <https://bugzilla.redhat.com/show_bug.cgi?id=1108448>
04:41 glusterbot News from newglusterbugs: [Bug 1155181] Lots of compilation warnings on OSX.  We should probably fix them. <https://bugzilla.redhat.com/show_bug.cgi?id=1155181>
04:41 glusterbot News from resolvedglusterbugs: [Bug 919898] Bogus missing libtoolize warning from autogen.sh on OSX <https://bugzilla.redhat.com/show_bug.cgi?id=919898>
04:41 glusterbot News from resolvedglusterbugs: [Bug 919953] Namespace clash for TMP_MAX in gluster code on OSX <https://bugzilla.redhat.com/show_bug.cgi?id=919953>
04:41 glusterbot News from resolvedglusterbugs: [Bug 920369] Compilation failure on OSX due to missing pthread.h include <https://bugzilla.redhat.com/show_bug.cgi?id=920369>
04:41 glusterbot News from resolvedglusterbugs: [Bug 920372] Inconsistent ./configure syntax errors due to improperly quoted PKG_CHECK_MODULES parameters <https://bugzilla.redhat.com/show_bug.cgi?id=920372>
04:41 glusterbot News from resolvedglusterbugs: [Bug 921817] autogen should warn if pkg-config missing <https://bugzilla.redhat.com/show_bug.cgi?id=921817>
04:41 glusterbot News from resolvedglusterbugs: [Bug 924891] autogen should warn if tar missing <https://bugzilla.redhat.com/show_bug.cgi?id=924891>
04:41 glusterbot News from resolvedglusterbugs: [Bug 961892] Compilation chain isn't honouring CFLAGS environment variable <https://bugzilla.redhat.com/show_bug.cgi?id=961892>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1018308] GlusterFS installation on CentOS 6.4 fails with "No package rsyslog-mmcount available." <https://bugzilla.redhat.com/show_bug.cgi?id=1018308>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1084485] tests/bugs/bug-963678.t is crashing in Rackspace, generating core file <https://bugzilla.redhat.com/show_bug.cgi?id=1084485>
04:41 glusterbot News from resolvedglusterbugs: [Bug 978205] NFS mount failing for several volumes with 3.4.0 beta3.  Only last one created can be mounted with NFS. <https://bugzilla.redhat.com/show_bug.cgi?id=978205>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1010352] Typo in systemd system definition file <https://bugzilla.redhat.com/show_bug.cgi?id=1010352>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1023667] The Python libgfapi API needs more fops <https://bugzilla.redhat.com/show_bug.cgi?id=1023667>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1026977] [abrt] glusterfs-3git-1.fc19: CThunkObject_dealloc: Process /usr/sbin/glusterfsd was killed by signal 11 (SIGSEGV) <https://bugzilla.redhat.com/show_bug.cgi?id=1026977>
04:41 glusterbot News from resolvedglusterbugs: [Bug 921232] No structured logging in glusterfs-hadoop <https://bugzilla.redhat.com/show_bug.cgi?id=921232>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1064096] The old Python Translator code (not Glupy) should be removed <https://bugzilla.redhat.com/show_bug.cgi?id=1064096>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1069840] GlusterFS rpm compilation fails on CentOS 5.x <https://bugzilla.redhat.com/show_bug.cgi?id=1069840>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1071504] rpmbuild/BUILD directory needs to be created for CentOS 5.x <https://bugzilla.redhat.com/show_bug.cgi?id=1071504>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1073168] The Gluster Test Framework could use some initial sanity checks <https://bugzilla.redhat.com/show_bug.cgi?id=1073168>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1074045] Geo-replication doesn't work on EL5, so rpm packaging of it should be disabled <https://bugzilla.redhat.com/show_bug.cgi?id=1074045>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1084147] tests/bugs/bug-767095.t needs to be more robust.  It's failing on long hostnames. <https://bugzilla.redhat.com/show_bug.cgi?id=1084147>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1084653] tests/bugs/bug-865825.t needs to wait longer for self-heal daemon to start.  It's failing in Rackspace due to this. <https://bugzilla.redhat.com/show_bug.cgi?id=1084653>
04:41 glusterbot News from resolvedglusterbugs: [Bug 976124] "make glusterrpms" errors out for GlusterFS release-3.4 branch on F19 <https://bugzilla.redhat.com/show_bug.cgi?id=976124>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1017176] Until RDMA handling is improved, we should output a warning when using RDMA volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1017176>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1049470] Gluster could do with a useful cli utility for updating host definitions <https://bugzilla.redhat.com/show_bug.cgi?id=1049470>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1108958] run-tests.sh should warn on missing yajl <https://bugzilla.redhat.com/show_bug.cgi?id=1108958>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1077159] Quota hard limit not being enforced consistently <https://bugzilla.redhat.com/show_bug.cgi?id=1077159>
04:41 glusterbot News from resolvedglusterbugs: [Bug 972465] "invalid regex -P ^(?!lib).*.so.*$" when running make glusterrpms on F18 <https://bugzilla.redhat.com/show_bug.cgi?id=972465>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1000019] Bogus dates in RPM changelog <https://bugzilla.redhat.com/show_bug.cgi?id=1000019>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1053670] "compress" option name for over-wire-compression is extremely misleading and should be changed <https://bugzilla.redhat.com/show_bug.cgi?id=1053670>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1018619] All Glupy translators nonfunctional in upstream GlusterFS git master <https://bugzilla.redhat.com/show_bug.cgi?id=1018619>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1146279] Compilation on OSX is broken with upstream git master and release-3.6 branches <https://bugzilla.redhat.com/show_bug.cgi?id=1146279>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1017868] Gluster website needs better instructions for mailing list page <https://bugzilla.redhat.com/show_bug.cgi?id=1017868>
04:41 glusterbot News from resolvedglusterbugs: [Bug 919916] glusterd compilation failure on OSX due to AT_SYMLINK_NOFOLLOW <https://bugzilla.redhat.com/show_bug.cgi?id=919916>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1141659] OSX LaunchDaemon plist file should be org.gluster... instead of com.gluster... <https://bugzilla.redhat.com/show_bug.cgi?id=1141659>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1141665] OSX LaunchDaemon plist file should be org.gluster... instead of com.gluster... <https://bugzilla.redhat.com/show_bug.cgi?id=1141665>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1141682] The extras/MacOSX directory is no longer needed, and should be removed <https://bugzilla.redhat.com/show_bug.cgi?id=1141682>
04:41 glusterbot News from resolvedglusterbugs: [Bug 1141683] The extras/MacOSX directory is no longer needed, and should be removed <https://bugzilla.redhat.com/show_bug.cgi?id=1141683>
04:41 glusterbot News from resolvedglusterbugs: [Bug 922432] Upstream generated spec file references non-existing patches <https://bugzilla.redhat.com/show_bug.cgi?id=922432>
04:42 glusterbot News from resolvedglusterbugs: [Bug 954190] /etc/init.d/glusterfsd missing from upstream git compiled rpms <https://bugzilla.redhat.com/show_bug.cgi?id=954190>
04:42 glusterbot News from resolvedglusterbugs: [Bug 1142045] THANKS message in git repo has typos <https://bugzilla.redhat.com/show_bug.cgi?id=1142045>
04:42 glusterbot News from resolvedglusterbugs: [Bug 1142046] THANKS message in git repo has typos <https://bugzilla.redhat.com/show_bug.cgi?id=1142046>
04:45 hagarth wow, what's this?
04:46 plarsen joined #gluster
04:46 itisravi glusterbot is high :)
04:47 cicero :'(
04:48 kshlm joined #gluster
04:54 jotun joined #gluster
04:54 haomaiwa_ joined #gluster
05:01 ppai joined #gluster
05:04 al joined #gluster
05:08 vimal joined #gluster
05:08 meghanam joined #gluster
05:11 glusterbot News from resolvedglusterbugs: [Bug 1216051] nfs-ganesha: More parameters to be added to ganesha global config file <https://bugzilla.redhat.com/show_bug.cgi?id=1216051>
05:11 stickyboy I need to plan downtime for 3.5.3 -> 3.5.5...
05:11 stickyboy Then I will jump from 3.5.x to 3.7.
05:12 gem joined #gluster
05:15 pppp joined #gluster
05:24 arcolife joined #gluster
05:27 vmallika joined #gluster
05:27 ramteid joined #gluster
05:28 rjoseph joined #gluster
05:30 anil joined #gluster
05:30 anmol joined #gluster
05:32 hchiramm joined #gluster
05:32 dusmant joined #gluster
05:41 deepakcs joined #gluster
05:41 glusterbot News from newglusterbugs: [Bug 1242329] [Quota] : Inode quota spurious failure <https://bugzilla.redhat.com/show_bug.cgi?id=1242329>
05:41 jiffin joined #gluster
05:44 kotreshhr joined #gluster
05:50 Bhaskarakiran joined #gluster
05:50 hgowtham joined #gluster
05:52 Manikandan joined #gluster
05:52 meghanam_ joined #gluster
05:52 ashiq joined #gluster
05:54 maveric_amitc_ joined #gluster
05:57 hagarth joined #gluster
05:58 kdhananjay joined #gluster
05:59 soumya joined #gluster
06:07 raghu joined #gluster
06:07 atalur joined #gluster
06:10 jtux joined #gluster
06:10 owlbot joined #gluster
06:16 shubhendu joined #gluster
06:24 nsoffer joined #gluster
06:32 gildub_ joined #gluster
06:36 XpineX joined #gluster
06:38 kdhananjay joined #gluster
06:42 pcaruana joined #gluster
06:43 mahendra_ joined #gluster
06:44 Pupeno joined #gluster
06:45 Pupeno_ joined #gluster
06:45 dusmant joined #gluster
06:55 mbukatov joined #gluster
07:06 [Enrico] joined #gluster
07:18 rafi joined #gluster
07:38 Trefex joined #gluster
07:44 meghanam_ joined #gluster
07:47 MadMatUK joined #gluster
07:48 kovshenin joined #gluster
07:52 tessier http://pastebin.com/VmaP7B3P
07:52 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
07:52 tessier Trying to get KVM to play nice with gluster. Not having much luck. :|
07:52 Pupeno joined #gluster
07:52 tessier Anyone know why it would be producing this error?
07:56 tessier hmm...I may need to do option rpc-auth-allow-insecure on on each glusterd and restart it...
07:59 tessier Yeay! That seems to have cleared up that error.
08:00 tessier Unfortunately, my VM still won't boot. :(
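    [For context, getting qemu/libvirt to reach gluster over libgfapi typically needs two related settings; a sketch, with "vmstore" as a placeholder volume name:
        # in /etc/glusterfs/glusterd.vol on every gluster server, then restart glusterd
        option rpc-auth-allow-insecure on
        # per-volume counterpart
        gluster volume set vmstore server.allow-insecure on]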
08:04 ctria joined #gluster
08:07 PatNarcisoZzZ joined #gluster
08:09 kdhananjay joined #gluster
08:13 MadMatUK joined #gluster
08:20 dusmant joined #gluster
08:23 kdhananjay joined #gluster
08:27 gem_ joined #gluster
08:30 fsimonce joined #gluster
08:32 smohan joined #gluster
08:36 R0ok_ joined #gluster
08:45 shubhendu joined #gluster
08:45 dusmant joined #gluster
08:48 overclk raghu, the bad file patch is failing regression runs again. spurious?
08:49 autoditac joined #gluster
08:50 Bhaskarakiran joined #gluster
08:55 freakzz joined #gluster
09:04 arcolife joined #gluster
09:05 arcolife joined #gluster
09:06 meghanam_ joined #gluster
09:15 Arminder joined #gluster
09:15 kbyrne joined #gluster
09:16 Arminder- joined #gluster
09:17 mator joined #gluster
09:19 Arminder- left #gluster
09:27 dusmant joined #gluster
09:30 mbukatov joined #gluster
09:30 meghanam_ joined #gluster
09:34 mbukatov joined #gluster
09:34 ppai joined #gluster
09:40 fsimonce joined #gluster
09:42 glusterbot News from resolvedglusterbugs: [Bug 1188184] NFS-Ganesha new features support for 3.7 <https://bugzilla.redhat.com/show_bug.cgi?id=1188184>
09:45 anoopcs_ joined #gluster
09:45 kshlm joined #gluster
09:52 dusmant joined #gluster
09:54 nsoffer joined #gluster
09:58 vmallika joined #gluster
10:01 necrogami joined #gluster
10:01 MadMatUK joined #gluster
10:05 ira joined #gluster
10:09 Manikandan joined #gluster
10:11 anmol joined #gluster
10:12 prabu joined #gluster
10:13 Lee1092 joined #gluster
10:28 foster joined #gluster
10:36 Philambdo joined #gluster
10:37 LebedevRI joined #gluster
10:43 kshlm joined #gluster
10:47 anmol joined #gluster
10:48 ira joined #gluster
10:56 kshlm joined #gluster
10:59 shubhendu joined #gluster
11:07 karnan joined #gluster
11:12 gildub joined #gluster
11:12 shubhendu joined #gluster
11:15 sadbox joined #gluster
11:17 jcastill1 joined #gluster
11:20 Zaish joined #gluster
11:22 jcastillo joined #gluster
11:23 rafi joined #gluster
11:25 dusmant joined #gluster
11:35 unclemarc joined #gluster
11:36 Manikandan joined #gluster
11:38 firemanxbr joined #gluster
11:42 jcastill1 joined #gluster
11:44 lkoranda joined #gluster
11:47 jcastillo joined #gluster
11:55 natarej joined #gluster
11:56 kotreshhr left #gluster
12:03 Zaish Hey guys
12:07 squaly joined #gluster
12:13 Manikandan joined #gluster
12:15 jtux joined #gluster
12:19 rafi1 joined #gluster
12:21 Bhaskarakiran joined #gluster
12:24 overclk joined #gluster
12:27 itisravi_ joined #gluster
12:28 MadMatUK joined #gluster
12:28 rjoseph joined #gluster
12:31 topshare joined #gluster
12:33 anil joined #gluster
12:36 hagarth joined #gluster
12:37 arcolife joined #gluster
12:38 harish_ joined #gluster
12:42 glusterbot News from newglusterbugs: [Bug 1241621] gfapi+rdma IO errors with large block sizes (Transport endpoint is not connected) <https://bugzilla.redhat.com/show_bug.cgi?id=1241621>
12:47 chirino joined #gluster
12:47 Bhaskarakiran joined #gluster
12:50 mdavidson joined #gluster
12:51 shaunm joined #gluster
12:55 plarsen joined #gluster
12:58 Zaish_ joined #gluster
13:01 julim joined #gluster
13:01 elico joined #gluster
13:03 jcastill1 joined #gluster
13:04 julim_ joined #gluster
13:04 B21956 joined #gluster
13:07 davidbitton joined #gluster
13:10 jmarley joined #gluster
13:12 glusterbot News from newglusterbugs: [Bug 1242504] [Data Tiering]: Frequency Counters of un-selected file in the DB wont get clear after a promotion/demotion cycle <https://bugzilla.redhat.com/show_bug.cgi?id=1242504>
13:12 shaunm joined #gluster
13:20 jcastillo joined #gluster
13:22 nsoffer joined #gluster
13:24 shyam joined #gluster
13:26 sadbox joined #gluster
13:27 georgeh-LT2 joined #gluster
13:28 hamiller joined #gluster
13:28 dgandhi joined #gluster
13:33 DV joined #gluster
13:42 glusterbot News from newglusterbugs: [Bug 1233025] GlusterFS 3.7.3 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1233025>
13:43 jcastill1 joined #gluster
13:47 victori joined #gluster
13:49 PatNarcisoZzZ joined #gluster
13:49 jcastillo joined #gluster
13:49 bjornar joined #gluster
13:55 jcastill1 joined #gluster
14:00 jcastillo joined #gluster
14:06 jcastill1 joined #gluster
14:10 bene2 joined #gluster
14:11 jcastillo joined #gluster
14:11 side_control joined #gluster
14:13 glusterbot News from newglusterbugs: [Bug 1242536] Data Tiering: Rename of file is not heating up the file <https://bugzilla.redhat.com/show_bug.cgi?id=1242536>
14:13 mpietersen joined #gluster
14:16 lpabon joined #gluster
14:18 arcolife joined #gluster
14:19 rafi joined #gluster
14:30 jbrooks joined #gluster
14:34 julim joined #gluster
14:36 woakes070048 joined #gluster
14:40 jrm16020 joined #gluster
14:41 hchiramm_home joined #gluster
14:43 glusterbot News from resolvedglusterbugs: [Bug 1242546] Peer not recognized after IP address change <https://bugzilla.redhat.com/show_bug.cgi?id=1242546>
14:45 n-st joined #gluster
14:50 atalur joined #gluster
14:53 autoditac joined #gluster
14:54 woakes07004 joined #gluster
14:56 kkeithley joined #gluster
15:00 dblack joined #gluster
15:02 firemanxbr joined #gluster
15:03 n-st joined #gluster
15:05 overclk joined #gluster
15:05 n-st joined #gluster
15:07 nbalacha joined #gluster
15:07 rotbeard joined #gluster
15:10 victori joined #gluster
15:11 mckaymatt joined #gluster
15:13 mpietersen joined #gluster
15:14 firemanxbr joined #gluster
15:15 n-st joined #gluster
15:15 ron-slc joined #gluster
15:18 topshare joined #gluster
15:19 wushudoin| joined #gluster
15:21 vmallika joined #gluster
15:23 BillyBob joined #gluster
15:27 cholcombe joined #gluster
15:36 twx joined #gluster
15:36 jiqiren joined #gluster
15:36 cuqa_ joined #gluster
15:37 Dave joined #gluster
15:37 sac joined #gluster
15:37 sac joined #gluster
15:37 zerick joined #gluster
15:37 vincent_vdk joined #gluster
15:37 m0zes joined #gluster
15:37 oxidane joined #gluster
15:37 d-fence joined #gluster
15:37 Marqin joined #gluster
15:37 coreping joined #gluster
15:37 yoavz joined #gluster
15:37 dastar_ joined #gluster
15:37 Kins joined #gluster
15:37 nhayashi joined #gluster
15:37 Bardack joined #gluster
15:37 rehunted joined #gluster
15:42 mckaymatt joined #gluster
15:42 jrm16020 joined #gluster
15:43 theron joined #gluster
15:47 rafi joined #gluster
15:47 meghanam joined #gluster
15:49 frankS2 joined #gluster
15:50 BillyBob Hello, i am trying to fix my Gluster cluster, I have lost one server that had 2 bricks (I have Number of Bricks: 4 x 2 = 8). I've done the Brick Restoration - Replace Crashed Server process and I still can't get the new bricks in and Gluster to heal itself. At the moment the new server (has the same host name as the dead one) is in State: Accepted
15:50 BillyBob peer request (Connected) on the other members of the cluster.... Any ideas ?)
15:53 ChrisNBlum joined #gluster
15:54 Pintomatic joined #gluster
15:56 BillyBob when I do "gluster volume info" I can see the 8 bricks, but when I do "gluster volume status" the replacement bricks aren't even listed, so I tried to add them using "volume add-brick" but I get an error because "Host is not in 'Peer in Cluster' state"
15:57 billputer joined #gluster
15:58 jdossey joined #gluster
16:02 mpietersen is your replacement server listed in the peer group?
16:03 BillyBob mpietersen: gluster peer status? yes but "State: Sent and Received peer request (Connected)"
16:04 nsoffer joined #gluster
16:04 BillyBob gluster volume info: lists all bricks (even the 2 missing ones) on the 3 working peers
16:04 BillyBob but gluster volume status doesn't list them
16:05 mpietersen BillyBob: you had 4 servers and now you have 3, correct?
16:05 BillyBob and on the replacement server gluster volume status: lists its 2 bricks and says they are offline
16:05 BillyBob yes
16:05 BillyBob mpietersen: yes
16:07 mpietersen are you trying to add new bricks residing on a new host, or on existing members of the peer group
16:08 BillyBob mpietersen: I lost a server and can't get it back, so I am trying to replace it
16:08 BillyBob mpietersen: I've done this: http://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
16:09 mpietersen so you've done that from your 'replacement server' ?
16:10 BillyBob yes
16:11 BillyBob this is how i created my setup if it helps: gluster volume create new-gluster-volume replica 2 transport tcp \
16:11 BillyBob gluster1:/brick1/share gluster3:/brick1/share \
16:11 BillyBob gluster2:/brick1/share gluster4:/brick1/share \
16:11 BillyBob gluster1:/brick2/share gluster3:/brick2/share \
16:11 BillyBob gluster2:/brick2/share gluster4:/brick2/share
16:11 BillyBob gluster4 is the one i am trying to replace
16:13 mpietersen and you've probed the new peer, correct?
16:14 BillyBob mpietersen: yes but the replacement for gluster4 is "State: Sent and Received peer request (Connected)"
16:14 BillyBob and not peer in cluster
16:14 mpietersen hrm
16:16 BillyBob I've had this happen once, and recreated the entire cluster (thank you ansible). But I need to be able to fix this, if not will be the last time I use Gluster :(
16:19 overclk joined #gluster
16:21 BillyBob I don't understand why this is such a difficult thing; I must be missing something here
16:21 BillyBob surely people are losing instances all the time
16:22 shubhendu joined #gluster
16:23 calavera joined #gluster
16:24 mpietersen BillyBob: honestly, I'm not sure. I'm still PoC-ing gluster
16:26 hagarth left #gluster
16:26 BillyBob well it's great when it works, but I still haven't been able to replace a peer.
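    [For reference, the crashed-server procedure BillyBob linked boils down to reusing the dead peer's UUID on the replacement box; roughly, using the hostnames and volume name from his example (exact steps can vary by version):
        # on a surviving peer: find the UUID the cluster still has for gluster4
        gluster peer status                          # or inspect /var/lib/glusterd/peers/
        # on the replacement gluster4: adopt that UUID before joining
        service glusterd stop
        sed -i 's/^UUID=.*/UUID=<old-gluster4-uuid>/' /var/lib/glusterd/glusterd.info
        service glusterd start
        gluster peer probe gluster1                  # lets the volume configuration sync over
        service glusterd restart
        # once every node reports "Peer in Cluster", trigger the heal
        gluster volume heal new-gluster-volume full]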
16:30 firemanxbr joined #gluster
16:31 64MADCDKY joined #gluster
16:33 vmallika joined #gluster
16:34 plarsen joined #gluster
16:41 edong23 joined #gluster
16:42 victori_ joined #gluster
16:43 tiemen joined #gluster
16:43 glusterbot News from newglusterbugs: [Bug 1242393] Performance: Impact of Bitrot on I/O Performance <https://bugzilla.redhat.com/show_bug.cgi?id=1242393>
16:46 kshlm joined #gluster
16:50 BillyBob left #gluster
16:51 mlhess Anyone know if a load of 9 on a gluster server while writing a 9 GB file from a client is normal? I am getting really bad performance on high-powered hosts
16:51 Rapture joined #gluster
16:54 ekuric joined #gluster
16:57 autoditac joined #gluster
16:58 mpietersen joined #gluster
17:00 overclk joined #gluster
17:02 jiffin joined #gluster
17:07 bennyturns joined #gluster
17:12 JoeJulian mlhess: nope, not normal.
17:13 mlhess JoeJulian any ideas where I would go to start to trouble shoot?
17:14 overclk joined #gluster
17:14 JoeJulian memory, io (disk, storage interface, storage interface driver, network driver...). Those are off the top of my head.
17:15 JoeJulian I knew a guy that once had a single bad core in a cpu that caused odd behavior like that.
17:19 mlhess JoeJulian top reports 295% CPU and 10.5% memory usage. Using iperf, I am getting 8.3 Gbit/s. Using dd on the gluster server I get around 599 MB/s to create a 1G file. From the client I get 85 MB/s.
17:19 mlhess There are 2 servers with one brick each.
17:20 mlhess On dedicated VM's.
17:21 mlhess Gluster 3.7
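    [A sketch of the kind of server-versus-client dd comparison mlhess describes; the brick and mount paths are placeholders:
        # write directly to the brick filesystem on the server (baseline)
        dd if=/dev/zero of=/bricks/brick1/ddtest bs=1M count=1024 conv=fdatasync
        # write through the FUSE mount on the client
        dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 conv=fdatasync]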
17:26 nage joined #gluster
17:46 magamo left #gluster
17:58 plarsen joined #gluster
18:01 mpietersen joined #gluster
18:01 soumya joined #gluster
18:09 kkeithley left #gluster
18:12 kkeithley joined #gluster
18:13 glusterbot News from newglusterbugs: [Bug 1242609] replacing a offline brick fails with "replace-brick" command <https://bugzilla.redhat.com/show_bug.cgi?id=1242609>
18:15 jobewan joined #gluster
18:17 mlhess JoeJulian moving to gluster 3.6 fixed the issues and the speed problem
18:19 JoeJulian mlhess: interesting. Which minor version of 3.7 was that?
18:19 mlhess JoeJulian http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/epel-7/x86_64/
18:20 JoeJulian Thanks.
18:20 mlhess JoeJulian used to install 3.6 " yum install http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/RHEL/epel-7/x86_64/glusterfs-3.6.3-1.el7.x86_64.rpm http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/RHEL/epel-7/x86_64/glusterfs-api-3.6.3-1.el7.x86_64.rpm http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/RHEL/epel-7/x86_64/glusterfs-cli-3.6.3-1.el7.x86_64.rpm http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3
18:20 mlhess RHEL/epel-7/x86_64/glusterfs-fuse-3.6.3-1.el7.x86_64.rpm http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/RHEL/epel-7/x86_64/glusterfs-server-3.6.3-1.el7.x86_64.rpm  http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/RHEL/epel-7/x86_64/glusterfs-libs-3.6.3-1.el7.x86_64.rpm"
18:20 mlhess and used to install 7 "yum install http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/epel-7/x86_64/glusterfs-3.7.2-3.el7.x86_64.rpm http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/epel-7/x86_64/glusterfs-api-3.7.2-3.el7.x86_64.rpm http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/epel-7/x86_64/glusterfs-cli-3.7.2-3.el7.x86_64.rpm http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/epel-7/x86_64/g
18:20 mlhess usterfs-libs-3.7.2-3.el7.x86_64.rpm http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/epel-7/x86_64/glusterfs-server-3.7.2-3.el7.x86_64.rpm http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/epel-7/x86_64/glusterfs-client-xlators-3.7.2-3.el7.x86_64.rpm  http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/epel-7/x86_64/glusterfs-fuse-3.7.2-3.el7.x86_64.rpm"
18:20 JoeJulian holy cow
18:21 JoeJulian you like to work harder than I do.
18:21 jdossey eek
18:22 jdossey my eyes!
18:22 mlhess Sorry
18:23 mlhess I don't normally use rhel, but I wanted the support from redhat if I needed to pay for it.  If we move this to production, I will get the yum repos setup
18:24 JoeJulian Interesting.
18:26 shyam mlhess: With 3.7 we do use more threads to drive throughput, which could result in increased load, but it should have had a positive impact on performance. FWIW, if you still can, check whether setting server.event-threads to 1 makes the problem go away on 3.7 (if the problem is specific to 3.7.2, this may not help, as the setting has been present since 3.7.0)
18:27 mlhess I can upgrade again later on today.  I have lots of small files so I would rather be on 3.7
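    [The setting shyam refers to would be applied roughly like this, with "myvol" as a placeholder volume name:
        gluster volume set myvol server.event-threads 1]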
18:33 mckaymatt joined #gluster
18:40 multipower joined #gluster
18:41 multipower hey geeks, can someone please help me... I want to exclude a particular directory in GlusterFS; how do I make this work?
18:43 JoeJulian exclude it from what?
18:45 multipower exclude files from sync with the other node
18:46 multipower thanks for your answer Joe
18:46 JoeJulian Don't put them on a replicated volume.
18:46 multipower do i need to edit some configuration in /etc/glusterfs
18:48 JoeJulian Remember, you're not syncing files on bricks, you're using a volume. If you need volumes that do two different things, make two different volumes. Mount them both and use symlinks to achieve what you seem to desire.
18:49 JoeJulian Eventually, you'll be able to define replication levels more granularly, but for now, no.
18:50 multipower thanks Joe... but i have something like /datastore/1/ & /datastore/2/ on server1 and server2... if i want to exclude /datastore/2 from being synced
18:50 multipower just directory 2 being synced.. is it possible
18:51 JoeJulian I assume you're suggesting that /datastore is a brick?
18:52 multipower Volume Name: datastore
18:52 multipower Type: Replicate
18:52 multipower Status: Started
18:52 multipower Number of Bricks: 2
18:52 multipower Transport-type: tcp
18:52 multipower Bricks:
18:52 multipower something like this..i assume its 2 bricks
18:52 JoeJulian Bricks are storage for GlusterFS, not your system. What I'm suggesting is you have a non-replicated volume that mounts on /datastore/2 and a replicated volume that mounts on /datastore/1.
18:53 JoeJulian btw... please don't paste in channel. ,,(paste)
18:53 glusterbot For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
18:53 JoeJulian Or you can just copy/paste to fpaste.org or similar.
18:53 multipower sorry
18:53 multipower okay my problem is directory 2 is so big...i dont want to sync it..just exclude it
18:54 JoeJulian So you don't want the files in datastore/2 to be available to your entire cluster, just the local machine.
18:54 multipower yes
18:54 multipower u got it
18:55 JoeJulian mv /datastore/2 /local/2 && ln -s /local/2 /datastore
18:57 multipower great suggestion.. soft links will not be synced.. my problem will arise since i am running out of space on my local disk... i can't move a 2 TB directory locally.. i only have space on /datastore
19:00 JoeJulian ~pasteinfo | multipower
19:00 glusterbot multipower: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:01 multipower @JoeJulian http://fpaste.org/243930/68140691/
19:06 julim joined #gluster
19:06 rafi joined #gluster
19:07 JoeJulian multipower: Ok, assuming the plan is to make /testdir some sort of large storage device, either partition it, or not. Use /testdir/afr and /testdir/dht instead. Make your volumes like "gluster volume create replica 2 gluster{1,2}:/testdir/afr" and "gluster volume create gluster{1,2}:/testdir/dht"
19:08 JoeJulian multipower: then mount those volumes on server{1,2} as "mount -t glusterfs gluster1:afr /datastore && mount -t glusterfs gluster1:dht /datastore/2"
19:09 nsoffer joined #gluster
19:09 JoeJulian Also, consider using ,,(rrdns) to mount your volumes to allow your servers to operate if one of the gluster servers is down for maintenance.
19:09 glusterbot You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
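    [Spelled out with explicit volume names (the names "afrvol" and "dhtvol" are placeholders), the two-volume layout JoeJulian describes would look roughly like:
        # replicated volume for everything that should stay in sync
        gluster volume create afrvol replica 2 gluster1:/testdir/afr gluster2:/testdir/afr
        # non-replicated (distribute-only) volume for the big directory
        gluster volume create dhtvol gluster1:/testdir/dht gluster2:/testdir/dht
        gluster volume start afrvol
        gluster volume start dhtvol
        # on each server
        mount -t glusterfs gluster1:/afrvol /datastore
        mount -t glusterfs gluster1:/dhtvol /datastore/2]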
19:10 Twistedgrim joined #gluster
19:15 rwheeler joined #gluster
19:16 multipower @Joejulian I'll figure it out, thank you so much!
19:18 mpietersen joined #gluster
19:23 julim joined #gluster
19:27 autoditac joined #gluster
19:35 mpietersen joined #gluster
19:35 mckaymatt joined #gluster
19:43 victori joined #gluster
19:57 jrm16020_ joined #gluster
20:01 mpietersen joined #gluster
20:03 calavera joined #gluster
20:04 mpietersen joined #gluster
20:11 DV joined #gluster
20:16 ira joined #gluster
20:20 calavera joined #gluster
20:25 mjrosenb joined #gluster
20:37 shaunm joined #gluster
20:41 dgbaley joined #gluster
20:48 barnim joined #gluster
20:53 jrm16020 joined #gluster
21:04 jcastill1 joined #gluster
21:04 badone joined #gluster
21:09 jcastillo joined #gluster
21:34 uebera|| joined #gluster
21:38 capri joined #gluster
21:42 obnox joined #gluster
21:53 victori joined #gluster
21:56 Gill joined #gluster
21:59 itsMontoya joined #gluster
21:59 itsMontoya I got an error that not all subvolumes were online, so it made my mounted disk unavailable
21:59 itsMontoya Is there a way to turn that off?
22:00 Careliyim joined #gluster
22:01 Careliyim Hey geeks,
22:01 Careliyim I just installed glusterfs and am now trying to use it on my 2 webservers; however, because both servers have different amounts of disk space, I don't want to sync all of the files. Any options?
22:13 jblack joined #gluster
22:14 jblack is there a way to use a backed up brick to start up a new gluster?
22:21 jblack perhaps someone can tell me where to find this doc? http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/
22:24 PatNarcisoZzZ joined #gluster
22:33 gildub joined #gluster
22:36 JoeJulian jblack: https://joejulian.name/blog/replacing-a-brick-on-glusterfs-340/
22:38 jblack hmm, that seems useful to know, but I may not be explaining well.
22:38 jblack I'm trying to set up a new gluster cluster, based on a snapshot from a previous, no-longer-existing cluster.
22:39 JoeJulian Are you getting an error?
22:40 jblack Not yet. I'm still in the process of copying the filesystem to a new partition, as the chef cookbook assumes a partitioned drive.
22:41 jblack my hope is to  build a 2 node cluster relatively quickly, with both systems getting a block device that is a snapshot of a volume.
22:42 JoeJulian You'll probably get the ,,(path or prefix) error trying to add the old brick to a new volume.
22:42 glusterbot http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
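    [The fix in that linked post is, roughly, to clear the markers that make glusterd think the directory already belongs to a volume; the brick path is a placeholder:
        setfattr -x trusted.glusterfs.volume-id /path/to/brick
        setfattr -x trusted.gfid /path/to/brick
        # the post also removes the brick's .glusterfs directory, though JoeJulian
        # suggests retaining it in this particular case (see below)]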
22:42 jblack back in feb I did a  2 node proof of concept. I saved the EBS volume of one of the two systems.  Now, I'm deploying new clusters, and I want to use that ebs snapshot as a starting point for new deployments, if that makes sense.
22:43 jblack should I retain or toss the .glusterfs dir thats in the brick?
22:43 JoeJulian I'd probably retain it.
22:44 jblack I suppose that once both systems come up, there will be a forced synchronization?
22:45 JoeJulian Should be.
22:45 JoeJulian If not "gluster volume heal $vol full"
22:45 jblack darn. I was hoping starting from identical images would save me that pain.
22:46 JoeJulian *if* they're identical, then you should be fine. If they're snapshots that might be off by a little bit, then something might need healed.
22:47 jblack they're two instances mounting filesystems that are generated on demand from the same snapshot
22:47 JoeJulian I'm referring to the point in time that the snapshots were taken.
22:48 jblack right. I'm  using a single snapshot as the source image for the data volume for  both  fileservers.
22:48 JoeJulian In other words, if the volume was healthy when it was snapshotted, it should still be healthy.
22:49 jblack e.g.    I'm using snap-112345   to  generate vol-abcde and vol-12345  with cloudformation and attaching them to   serverA and serverB, respectively
22:50 jblack so vol-abcde and vol-12345 should be identical copies of one another, with one going to server a, and the other going to server b
22:51 jblack is there a very good book on gluster I can read over the next couple days?
22:51 jblack I'm going through www.gluster.org/community/documentation at the moment
23:00 JoeJulian No books that I know of. I don't usually read industry books. They're usually out of date before they've even been printed.
23:01 jblack I think I've been asking the wrong question. I think I should be asking how to start a cluster with pre-existing data. I'm seeing some ancient mailing list posts that indicate that if files already exist when the brick is created, gluster will self heal.
23:02 jblack for a 150 gig filesystem, that's not going to require 150Gig+  of network traffic, is it?
23:28 topshare joined #gluster
23:29 davidbitton joined #gluster
23:40 JoeJulian jblack: nope
23:51 jermudgeon_ joined #gluster
