IRC log for #gluster, 2015-07-26


All times are shown in UTC.

Time Nick Message
00:00 jcastillo joined #gluster
00:07 badone joined #gluster
00:37 dgandhi joined #gluster
00:48 krink joined #gluster
01:13 haomaiwa_ joined #gluster
01:48 gem joined #gluster
02:15 DV joined #gluster
02:15 nangthang joined #gluster
02:21 glusterbot News from newglusterbugs: [Bug 1233025] GlusterFS 3.7.3 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1233025>
02:39 haomaiwa_ joined #gluster
02:41 jcastill1 joined #gluster
02:46 jcastillo joined #gluster
03:05 skoduri joined #gluster
03:10 TheSeven joined #gluster
03:21 haomaiwa_ joined #gluster
03:26 spalai joined #gluster
03:38 Lee1092 joined #gluster
03:41 krink joined #gluster
03:56 harish joined #gluster
04:05 pppp joined #gluster
04:07 nangthang joined #gluster
04:47 gem joined #gluster
05:11 spalai joined #gluster
05:28 maveric_amitc_ joined #gluster
05:29 gem joined #gluster
05:48 anrao joined #gluster
05:53 kotreshhr joined #gluster
06:14 pppp joined #gluster
06:16 nangthang joined #gluster
06:17 kotreshhr left #gluster
06:21 Lee1092 joined #gluster
06:23 spalai joined #gluster
06:59 nishanth joined #gluster
06:59 uebera|| joined #gluster
06:59 LebedevRI joined #gluster
07:29 vimal joined #gluster
07:39 anrao joined #gluster
08:05 gem joined #gluster
08:06 skoduri joined #gluster
08:54 jwd joined #gluster
08:57 jwaibel joined #gluster
09:07 Lee- joined #gluster
09:35 TrincaTwik joined #gluster
09:35 jcastill1 joined #gluster
09:35 kovshenin joined #gluster
09:40 jcastillo joined #gluster
09:45 nsoffer joined #gluster
10:51 cleong joined #gluster
10:58 maZtah_ joined #gluster
10:59 Pintomatic_ joined #gluster
11:00 mrErikss1n joined #gluster
11:00 Asmadeus_ joined #gluster
11:00 twx_ joined #gluster
11:01 side_con1rol joined #gluster
11:01 bcicen_ joined #gluster
11:04 n-st_ joined #gluster
11:04 prg3_ joined #gluster
11:05 yosafbridge` joined #gluster
11:12 doekia joined #gluster
11:12 XpineX joined #gluster
11:12 rwheeler joined #gluster
11:14 lezo joined #gluster
11:15 Lee1092 joined #gluster
11:21 tdasilva joined #gluster
11:24 and` joined #gluster
11:27 samikshan joined #gluster
11:52 jwd joined #gluster
11:57 ira joined #gluster
12:57 social joined #gluster
13:02 harish joined #gluster
13:06 skoduri joined #gluster
13:25 DV__ joined #gluster
13:38 victori joined #gluster
13:39 nsoffer joined #gluster
14:13 haomaiwa_ joined #gluster
14:30 DV joined #gluster
14:31 haomaiwa_ joined #gluster
14:40 haomaiwang joined #gluster
14:45 nixpanic joined #gluster
14:45 nixpanic joined #gluster
14:47 haomaiwang joined #gluster
14:53 haomaiwa_ joined #gluster
15:13 nangthang joined #gluster
15:33 merlink joined #gluster
15:41 DV joined #gluster
16:00 nangthang joined #gluster
16:13 spalai joined #gluster
16:14 chirino joined #gluster
16:16 DV__ joined #gluster
17:23 merlink joined #gluster
17:58 maveric_amitc_ joined #gluster
18:04 spalai left #gluster
18:51 Philambdo joined #gluster
19:05 jwd joined #gluster
19:06 jwaibel joined #gluster
19:08 Intensity joined #gluster
19:33 Twistedgrim joined #gluster
19:54 cleong joined #gluster
20:07 nsoffer joined #gluster
20:41 jwd joined #gluster
20:42 scubacuda joined #gluster
20:46 dastar joined #gluster
20:56 dtrainor joined #gluster
20:58 dtrainor Hi.  I had a brick fail in a distributed-replicate volume.  It seems this happens fairly regularly (maybe I should choose better hardware?), yet I still forget how to swap it out, and when I research it, there's still no simple "replace the brick" method.  The best I can find is JoeJulian's blog, https://joejulian.name/blog/replacing-a-brick-on-glusterfs-340/ and another one, http://gluster-users.gluster.narkive.com/pqUYxRCv/official-word-on-gluster-replace-brick-for-3-6
20:59 dtrainor Understanding that I'll have no redundancy, is there a way to force the volume to start so that it's usable, albeit in a degraded state?
21:00 dtrainor is 'volume start ... force' too dangerous?
21:14 TheCthulhu2 joined #gluster
21:17 uebera|| joined #gluster
21:21 JoeJulian Sure, you can do that.
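A minimal sketch of the sequence being discussed, assuming GlusterFS 3.6+ CLI behaviour; the volume name (myvol) and brick paths below are hypothetical:

  # Force-start the volume so it serves data from the surviving bricks (degraded, no redundancy)
  gluster volume start myvol force

  # After preparing a replacement brick, swap out the failed one in a single step
  gluster volume replace-brick myvol server2:/bricks/failed server2:/bricks/new commit force

  # Repopulate the new brick from its replica partner
  gluster volume heal myvol full

Heals run in the background, so the volume stays usable while the new brick fills back up.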
21:34 side_control joined #gluster
21:53 badone joined #gluster
22:27 glusterbot News from newglusterbugs: [Bug 1122395] man or info page of gluster needs to be updated with self-heal commands. <https://bugzilla.redhat.com/show_bug.cgi?id=1122395>
22:27 glusterbot News from newglusterbugs: [Bug 1165010] Regression TestFrameWork : Starting a process fails with "Port already in use" error in our regression test framework <https://bugzilla.redhat.com/show_bug.cgi?id=1165010>
22:27 glusterbot News from newglusterbugs: [Bug 1145911] [SNAPSHOT]: Deletion of a snapshot in a volume or system fails if some operation which acquires the volume lock comes in between. <https://bugzilla.redhat.com/show_bug.cgi?id=1145911>
22:27 glusterbot News from newglusterbugs: [Bug 1131447] [Dist-geo-rep] : Session folders does not sync after a peer probe to new node. <https://bugzilla.redhat.com/show_bug.cgi?id=1131447>
22:27 glusterbot News from newglusterbugs: [Bug 1193767] [Quota] : gluster quota list does not show proper output if executed within few seconds of glusterd restart <https://bugzilla.redhat.com/show_bug.cgi?id=1193767>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1111060] [SNAPSHOT] : glusterd fails to update file-system type for brick which is present in other node. <https://bugzilla.redhat.com/show_bug.cgi?id=1111060>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1204604] [Data-tiering] :  Tiering error during configure even if tiering is disabled. <https://bugzilla.redhat.com/show_bug.cgi?id=1204604>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1218593] ec test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1218593>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1002945] Tracking an effort to convert the listed test cases to standard regression test format. <https://bugzilla.redhat.com/show_bug.cgi?id=1002945>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1087203] [SNAPSHOT]: gluster snapshot config should only accept the decimal numeric value <https://bugzilla.redhat.com/show_bug.cgi?id=1087203>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1087677] [SNAPSHOT]: snapshot config <vol> is to list the config, it should not acquire the volume lock to list. <https://bugzilla.redhat.com/show_bug.cgi?id=1087677>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1098122] [SNAPSHOT]: Setting greater than allowed snap-max-hard-limit output needs to have space in between <https://bugzilla.redhat.com/show_bug.cgi?id=1098122>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1098487] [SNAPSHOT] : restoring a snapshot is setting the "features.barrier" option to "enable" <https://bugzilla.redhat.com/show_bug.cgi?id=1098487>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1105415] [SNAPSHOT]: Auto-delete should be user configurable <https://bugzilla.redhat.com/show_bug.cgi?id=1105415>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1106406] [SNAPSHOT]: glusterd log prints "invalid snap command" warning for every successful snapshot status <https://bugzilla.redhat.com/show_bug.cgi?id=1106406>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1145090] [SNAPSHOT]: If the snapshoted brick has xfs options set as part of its creation, they are not automount upon reboot <https://bugzilla.redhat.com/show_bug.cgi?id=1145090>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1145095] [SNAPSHOT]: snapshot create fails with error in log "Failed to open directory <xyz>, due to many open files" <https://bugzilla.redhat.com/show_bug.cgi?id=1145095>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1145189] Fix for spurious failure <https://bugzilla.redhat.com/show_bug.cgi?id=1145189>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1145450] Fix for spurious failure <https://bugzilla.redhat.com/show_bug.cgi?id=1145450>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1146479] [SNAPSHOT]: Need logging correction during the lookup failure case. <https://bugzilla.redhat.com/show_bug.cgi?id=1146479>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1151933] Quota: features.quota-deem-statfs is "on" even after disabling quota. <https://bugzilla.redhat.com/show_bug.cgi?id=1151933>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1161015] [USS]: snapd process is not killed once the glusterd comes back <https://bugzilla.redhat.com/show_bug.cgi?id=1161015>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1175732] [SNAPSHOT]: nouuid is appended for every snapshoted brick which causes duplication if the original brick has already nouuid <https://bugzilla.redhat.com/show_bug.cgi?id=1175732>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1230693] [geo-rep]: RENAME are not synced to slave when quota is enabled. <https://bugzilla.redhat.com/show_bug.cgi?id=1230693>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1218243] quota/marker: turn off inode quotas by default <https://bugzilla.redhat.com/show_bug.cgi?id=1218243>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1101483] [SNAPSHOT]: Snapshot restore fails if quota.conf is present and quota.cksum is not present <https://bugzilla.redhat.com/show_bug.cgi?id=1101483>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1145086] [SNAPSHOT]: Snapshot of volume with thick provisioned LV as bricks does not give proper error message <https://bugzilla.redhat.com/show_bug.cgi?id=1145086>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1218170] [Quota] : To have a separate quota.conf file for inode quota. <https://bugzilla.redhat.com/show_bug.cgi?id=1218170>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1104642] [SNAPSHOT]: Snapshot config set is not updated on a node which had glusterd offline and than rebooted <https://bugzilla.redhat.com/show_bug.cgi?id=1104642>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1122399] [SNAPSHOT]: man or info page of gluster needs to be updated with snapshot commands <https://bugzilla.redhat.com/show_bug.cgi?id=1122399>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1145069] [SNAPSHOT]: man or info page of gluster needs to be updated with snapshot commands <https://bugzilla.redhat.com/show_bug.cgi?id=1145069>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1145084] [SNAPSHOT]: man or info page of gluster needs to be updated with snapshot commands <https://bugzilla.redhat.com/show_bug.cgi?id=1145084>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1202436] [SNAPSHOT]: After a volume which has quota enabled is restored to a snap, attaching another node to the cluster is not successful <https://bugzilla.redhat.com/show_bug.cgi?id=1202436>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1096700] [SNAPSHOT]: Quorum check should not be made for snapshot status command. <https://bugzilla.redhat.com/show_bug.cgi?id=1096700>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1113476] [SNAPSHOT] : gluster volume info should not show the value which is not set explicitly <https://bugzilla.redhat.com/show_bug.cgi?id=1113476>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1133426] [RFE] Add confirmation dialog to snapshot restore operation <https://bugzilla.redhat.com/show_bug.cgi?id=1133426>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1145020] [SNAPSHOT] : gluster volume info should not show the value which is not set explicitly <https://bugzilla.redhat.com/show_bug.cgi?id=1145020>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1145092] [RFE] Add confirmation dialog to snapshot restore operation <https://bugzilla.redhat.com/show_bug.cgi?id=1145092>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1155042] [USS] : don't display the snapshots which are not activated <https://bugzilla.redhat.com/show_bug.cgi?id=1155042>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1170921] [SNAPSHOT]: snapshot should be deactivated by default when created <https://bugzilla.redhat.com/show_bug.cgi?id=1170921>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1213364] [RFE] Quota: Make "quota-deem-statfs" option ON, by default, when quota is enabled. <https://bugzilla.redhat.com/show_bug.cgi?id=1213364>
22:29 glusterbot News from resolvedglusterbugs: [Bug 1226224] [RFE] Quota: Make "quota-deem-statfs" option ON, by default, when quota is enabled. <https://bugzilla.redhat.com/show_bug.cgi?id=1226224>
