
IRC log for #gluster, 2015-07-06


All times shown according to UTC.

Time Nick Message
00:02 dgbaley joined #gluster
00:04 dgbaley Hey. I'm having trouble getting non-root fuse mounts. I don't think "option rpc-auth-allow-insecure on" is working. Is it supposed to go within the "volume management ... end-volume" block? Yes, I've restarted since I added the option; rebooted the whole cluster even.
00:07 dgbaley This is 3.7.2 btw
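For reference, "option rpc-auth-allow-insecure on" does belong inside the "volume management ... end-volume" block of glusterd.vol. A minimal sketch (the path and the other options shown are typical stock defaults, not taken from this cluster):

```text
# /etc/glusterfs/glusterd.vol -- restart glusterd after editing
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option rpc-auth-allow-insecure on
end-volume
```

Non-root (unprivileged-port) clients usually also need the matching per-volume setting, e.g. `gluster volume set <VOLNAME> server.allow-insecure on`; without both halves, fuse mounts and libgfapi connections from insecure ports are rejected.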
00:27 hflai joined #gluster
00:45 dgandhi1 joined #gluster
00:53 PatNarcisoZzZ joined #gluster
01:08 harish joined #gluster
01:24 dgandhi joined #gluster
01:29 scubacuda joined #gluster
01:50 lyang0 joined #gluster
01:51 harish joined #gluster
02:06 nangthang joined #gluster
02:28 kdhananjay joined #gluster
02:42 shaunm_ joined #gluster
03:07 maveric_amitc_ joined #gluster
03:30 bharata-rao joined #gluster
03:32 TheSeven joined #gluster
03:32 sripathi joined #gluster
03:42 nishanth joined #gluster
03:43 atinm joined #gluster
03:58 gem joined #gluster
04:01 itisravi joined #gluster
04:06 raghug joined #gluster
04:11 dgbaley Well, I've convinced myself that the insecure ports are working... Getting this though when trying to launch a vm through libvirt/qemu/libgfapi/tcp:
04:11 dgbaley E [MSGID: 104024] [glfs-mgmt.c:738:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect with remote-host: ss-61 (Permission denied) [Permission denied]
04:12 dgbaley Been trying to figure out where errno is stil in glfs-mgmt.c
04:12 dgbaley On the glusterd side all I see in the log is that the client disconnects
04:12 dgbaley s/stil/set/
04:12 glusterbot What dgbaley meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
04:15 sakshi joined #gluster
04:21 shubhendu joined #gluster
04:23 yazhini joined #gluster
04:26 nbalacha joined #gluster
04:27 kshlm joined #gluster
04:36 meghanam joined #gluster
04:42 nbalacha joined #gluster
04:45 ppai joined #gluster
04:49 kanagaraj joined #gluster
04:49 ramteid joined #gluster
04:53 spandit joined #gluster
04:55 ndarshan joined #gluster
05:00 PatNarcisoZzZ joined #gluster
05:00 atalur joined #gluster
05:04 kotreshhr joined #gluster
05:05 deepakcs joined #gluster
05:05 pppp joined #gluster
05:09 aaronott joined #gluster
05:09 vmallika joined #gluster
05:10 RameshN joined #gluster
05:16 smohan joined #gluster
05:19 shubhendu joined #gluster
05:23 vimal joined #gluster
05:26 dusmant joined #gluster
05:30 rafi joined #gluster
05:32 scubacuda joined #gluster
05:33 Bhaskarakiran joined #gluster
05:39 Manikandan joined #gluster
05:42 ashiq joined #gluster
05:43 Philambdo joined #gluster
05:50 dusmant joined #gluster
05:53 SOLDIERz joined #gluster
05:55 anmol joined #gluster
05:56 RameshN joined #gluster
05:56 Saravana_ joined #gluster
05:56 jiffin joined #gluster
05:57 rjoseph joined #gluster
06:03 raghu joined #gluster
06:05 Bhaskarakiran joined #gluster
06:05 overclk joined #gluster
06:07 hagarth joined #gluster
06:11 kdhananjay joined #gluster
06:23 atalur joined #gluster
06:24 karnan joined #gluster
06:24 jtux joined #gluster
06:27 spalai joined #gluster
06:35 kdhananjay joined #gluster
06:41 Pupeno joined #gluster
06:44 nangthang joined #gluster
06:50 nishanth joined #gluster
06:51 dusmant joined #gluster
06:53 cornusammonis joined #gluster
06:59 nsoffer joined #gluster
07:07 smohan|afk joined #gluster
07:08 ramky joined #gluster
07:16 shubhendu joined #gluster
07:16 SOLDIERz joined #gluster
07:20 ramkrsna joined #gluster
07:20 ramkrsna joined #gluster
07:22 nangthang joined #gluster
07:30 Slashman joined #gluster
07:34 Manikandan joined #gluster
07:41 vimal joined #gluster
07:41 sripathi1 joined #gluster
07:42 [Enrico] joined #gluster
07:45 LebedevRI joined #gluster
07:45 fsimonce joined #gluster
07:45 ppai joined #gluster
07:49 vimal joined #gluster
07:52 nbalacha joined #gluster
07:54 natarej joined #gluster
07:54 Manikandan joined #gluster
08:00 al joined #gluster
08:03 gem joined #gluster
08:06 glusterbot News from resolvedglusterbugs: [Bug 1112518] [FEAT/RFE] "gluster volume restart" cli option <https://bugzilla.redhat.com/show_bug.cgi?id=1112518>
08:16 MrAbaddon joined #gluster
08:20 gem joined #gluster
08:30 soumya joined #gluster
08:36 glusterbot News from newglusterbugs: [Bug 1240210] Metadata self-heal is not handling failures while heal properly <https://bugzilla.redhat.com/show_bug.cgi?id=1240210>
08:39 ctria joined #gluster
08:47 dusmant joined #gluster
08:48 elico joined #gluster
08:49 social joined #gluster
08:57 jcastill1 joined #gluster
09:01 curratore joined #gluster
09:02 jcastillo joined #gluster
09:03 _shaps_ joined #gluster
09:04 soumya_ joined #gluster
09:06 glusterbot News from newglusterbugs: [Bug 1240218] Scrubber log should mark file corrupted message as Alert not as information <https://bugzilla.redhat.com/show_bug.cgi?id=1240218>
09:06 glusterbot News from newglusterbugs: [Bug 1240219] Scrubber log should mark file corrupted message as Alert not as information <https://bugzilla.redhat.com/show_bug.cgi?id=1240219>
09:14 gem joined #gluster
09:17 MrAbaddon joined #gluster
09:21 meghanam joined #gluster
09:22 dusmant joined #gluster
09:25 kdhananjay joined #gluster
09:34 Trefex joined #gluster
09:36 vimal joined #gluster
09:42 kotreshhr1 joined #gluster
09:43 spandit joined #gluster
09:46 harish joined #gluster
09:46 dusmant joined #gluster
09:50 kaushal_ joined #gluster
09:53 PatNarcisoZzZ joined #gluster
09:59 meghanam joined #gluster
09:59 gem joined #gluster
10:04 spandit joined #gluster
10:04 atinm joined #gluster
10:05 shubhendu joined #gluster
10:06 glusterbot News from newglusterbugs: [Bug 1178031] [SNAPSHOT]: fails to create on thin LV with LUKS layer in between <https://bugzilla.redhat.com/show_bug.cgi?id=1178031>
10:16 kotreshhr joined #gluster
10:19 kovshenin joined #gluster
10:20 soumya_ joined #gluster
10:21 Bosse_ left #gluster
10:23 maveric_amitc_ joined #gluster
10:24 voleatech joined #gluster
10:25 voleatech Hi, can anyone help me with an issue I'm having with KVM and gluster? I experience kernel freezes of the VM in combination with a replicated gluster share
10:36 kotreshhr1 joined #gluster
10:39 kotreshhr joined #gluster
10:41 atinm joined #gluster
10:42 Manikandan joined #gluster
10:42 Manikandan_ joined #gluster
10:46 soumya_ joined #gluster
10:46 shubhendu joined #gluster
10:50 gem_ joined #gluster
10:51 spalai joined #gluster
10:55 karnan joined #gluster
10:56 jcastill1 joined #gluster
10:57 dusmant joined #gluster
11:01 jcastillo joined #gluster
11:03 PatNarcisoZzZ joined #gluster
11:05 vmallika joined #gluster
11:20 kshlm joined #gluster
11:33 kotreshhr joined #gluster
11:34 kovshenin joined #gluster
11:35 spalai joined #gluster
11:35 atalur joined #gluster
11:37 glusterbot News from resolvedglusterbugs: [Bug 1209843] [Backup]: Crash observed when multiple sessions were created for the same volume <https://bugzilla.redhat.com/show_bug.cgi?id=1209843>
11:39 anmol joined #gluster
11:46 spalai1 joined #gluster
12:01 unclemarc joined #gluster
12:05 rajeshj joined #gluster
12:09 sage joined #gluster
12:13 jtux joined #gluster
12:14 Manikandan joined #gluster
12:18 jrm16020 joined #gluster
12:20 ppai joined #gluster
12:21 jrm16020 joined #gluster
12:23 [Enrico] joined #gluster
12:25 shubhendu joined #gluster
12:27 kotreshhr joined #gluster
12:28 itisravi_ joined #gluster
12:29 soumya_ joined #gluster
12:44 khanku joined #gluster
12:44 pppp joined #gluster
12:51 rwheeler joined #gluster
12:55 dusmant joined #gluster
12:57 mribeirodantas joined #gluster
13:03 rwheeler_ joined #gluster
13:03 julim joined #gluster
13:07 glusterbot News from newglusterbugs: [Bug 1240284] Disperse volume: NFS crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1240284>
13:07 glusterbot News from resolvedglusterbugs: [Bug 1232983] Disperse volume : fuse mount hung on renames on a distributed disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1232983>
13:10 tanuck joined #gluster
13:22 B21956 joined #gluster
13:24 nsoffer joined #gluster
13:25 coredump joined #gluster
13:25 kovshenin joined #gluster
13:32 georgeh-LT2 joined #gluster
13:34 maZtah joined #gluster
13:36 kovshenin joined #gluster
13:37 unclemarc joined #gluster
13:43 jobewan joined #gluster
13:48 PatNarcisoZzZ joined #gluster
13:49 shyam joined #gluster
13:49 Manikandan joined #gluster
13:51 bfoster joined #gluster
13:52 jmarley joined #gluster
13:52 plarsen joined #gluster
13:53 kotreshhr left #gluster
13:55 kovshenin joined #gluster
13:57 Lee- joined #gluster
14:07 glusterbot News from newglusterbugs: [Bug 1233025] GlusterFS 3.7.3 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1233025>
14:07 glusterbot News from resolvedglusterbugs: [Bug 1223634] glusterd could crash in remove-brick-status when local remove-brick process has just completed <https://bugzilla.redhat.com/show_bug.cgi?id=1223634>
14:07 glusterbot News from resolvedglusterbugs: [Bug 1225318] glusterd could crash in remove-brick-status when local remove-brick process has just completed <https://bugzilla.redhat.com/show_bug.cgi?id=1225318>
14:07 glusterbot News from resolvedglusterbugs: [Bug 1222065] GlusterD fills the logs when the NFS-server is disabled <https://bugzilla.redhat.com/show_bug.cgi?id=1222065>
14:21 wushudoin joined #gluster
14:22 theron joined #gluster
14:30 lpabon joined #gluster
14:31 ira joined #gluster
14:35 dgandhi joined #gluster
14:42 spalai joined #gluster
14:55 shubhendu joined #gluster
14:55 wkf joined #gluster
14:56 Jitendra joined #gluster
14:57 Jitendra joined #gluster
14:58 shyam joined #gluster
15:04 nsoffer joined #gluster
15:12 DV joined #gluster
15:15 haomaiwa_ joined #gluster
15:15 neofob left #gluster
15:15 hagarth joined #gluster
15:18 raghug joined #gluster
15:21 wehde joined #gluster
15:22 wehde can anyone tell me how to get the uuid of the storage pool for glusterfs
15:23 wehde Do volume ID's ever change in gluster?
15:29 jbrooks joined #gluster
15:30 autoditac joined #gluster
15:33 shyam joined #gluster
15:37 wehde how can i find my gluster spUUID and the sdUUID?
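spUUID/sdUUID are oVirt storage-pool/storage-domain terms rather than Gluster ones, but the Gluster-side identifiers can be read as below. This is a sketch with a hypothetical volume name and brick path; as far as I know, a volume's ID is generated at `volume create` time and does not change for the life of the volume.

```shell
# Volume ID as reported by the CLI (run on any peer):
gluster volume info myvol | grep -i "Volume ID"

# The same UUID is stored as an xattr on each brick root
# (brick path is an example -- adjust for your layout):
getfattr -n trusted.glusterfs.volume-id -e hex /bricks/brick1/myvol

# Each peer's own UUID:
cat /var/lib/glusterd/glusterd.info
```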
15:37 spalai joined #gluster
15:37 shyam joined #gluster
15:38 shubhendu joined #gluster
15:40 alexandregomes joined #gluster
15:42 alexandregomes joined #gluster
15:42 Vortac Any performance differences using ext vs xfs?
15:46 cholcombe joined #gluster
15:46 shyam joined #gluster
15:49 mator Vortac, no, only best practices
15:52 Saravana_ joined #gluster
16:00 Vortac mator: Thanks
16:01 m0zes I've seen minor performance differences, but it depends on the workload. In general, I've found that XFS performs better with more simultaneous readers/writers to the filesystem.
16:01 m0zes I've also found xfs to be much slower to delete files, though.
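For context, the usual brick-formatting recommendation in the Gluster docs of this era was XFS with 512-byte inodes, so Gluster's extended attributes fit inside the inode. A sketch with an example device and mount point (adjust for your hardware):

```shell
# 512-byte inodes keep Gluster's xattrs in-inode
mkfs.xfs -i size=512 /dev/sdb1
mount -o noatime /dev/sdb1 /bricks/brick1
```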
16:02 nsoffer joined #gluster
16:03 Vortac m0zes: Thanks.. I just bumped into Bugzilla #883905 on an older kernel (Centos 6.4 Final) where I'm doing a bit of gluster testing.. so just made me start to wonder and research a bit..
16:04 jcastill1 joined #gluster
16:04 Vortac Been doing some large copies into gluster which take a while (repl 2 - 2 node gluster) so been wondering how to speed that up too..
16:06 raghug joined #gluster
16:08 glusterbot News from resolvedglusterbugs: [Bug 1175735] [USS]: snapd process is not killed once the glusterd comes back <https://bugzilla.redhat.com/show_bug.cgi?id=1175735>
16:08 glusterbot News from resolvedglusterbugs: [Bug 1161015] [USS]: snapd process is not killed once the glusterd comes back <https://bugzilla.redhat.com/show_bug.cgi?id=1161015>
16:09 jcastillo joined #gluster
16:10 arthurh joined #gluster
16:17 marcoceppi joined #gluster
16:22 jdossey joined #gluster
16:36 wushudoin joined #gluster
16:40 shyam joined #gluster
16:40 RameshN joined #gluster
16:44 cyberswat joined #gluster
16:50 calavera joined #gluster
16:51 Scub joined #gluster
16:57 theron_ joined #gluster
16:57 Gill joined #gluster
16:58 PatNarcisoZzZ joined #gluster
17:06 calavera joined #gluster
17:13 calavera joined #gluster
17:21 Gill joined #gluster
17:26 calavera joined #gluster
17:34 Rapture joined #gluster
17:37 wushudoin| joined #gluster
17:38 rafi joined #gluster
17:40 Gill joined #gluster
17:43 wushudoin| joined #gluster
17:43 autoditac joined #gluster
17:46 jbautista- joined #gluster
17:53 ToMiles joined #gluster
17:59 spalai left #gluster
18:03 arthurh_ joined #gluster
18:04 jbautista- joined #gluster
18:10 ToMiles if replication and redundancy aren't important for a volume, since we want to provide a shared scratch/working volume only used during number crunching on the cluster, am I correct that distributed is still preferred to striped when it comes to performance?
18:12 ToMiles work load will be mostly sequential reads and writes
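For what it's worth, a plain distributed volume is the default layout when no `replica` or `stripe` count is given, and the stripe translator was generally discouraged. A sketch with example hostnames and brick paths:

```shell
# Plain distribute: files are hashed whole onto individual bricks
gluster volume create scratch transport tcp \
    node1:/bricks/scratch node2:/bricks/scratch node3:/bricks/scratch
gluster volume start scratch
```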
18:12 mckaymatt joined #gluster
18:12 theron joined #gluster
18:14 arthurh joined #gluster
18:17 Gill joined #gluster
18:27 DV joined #gluster
18:31 nsoffer joined #gluster
18:51 mckaymatt joined #gluster
18:56 wushudoin| joined #gluster
19:02 wushudoin| joined #gluster
19:02 jmills joined #gluster
19:06 MrAbaddon joined #gluster
19:07 Pupeno joined #gluster
19:10 rotbeard joined #gluster
19:13 jmills is there any feature matrix comparing 3.5 3.6 and 3.7?
19:13 jmills running 3.5 for some time with no issues; if/when we upgrade, curious to know what version we should move to
19:16 nage joined #gluster
19:22 jiffin joined #gluster
19:34 spalai joined #gluster
19:41 l0uis jmills: I'm in a similar boat. 3.5 has been very solid for us. The easiest place I've found is to look at the release notes: https://github.com/gluster/glusterfs/tree/release-3.7/doc/release-notes
19:41 l0uis change the branch to release-3.6 to see the 3.6 notes.
19:41 l0uis jmills: in general though, when 3.8 comes out 3.5 won't be supported anymore, so the question becomes where to upgrade to, not if IMO
19:41 jmills heh
19:42 jmills thanks for the info!
19:43 l0uis jmills: not sure if the # of bugs fixed in 3.7.2 is a good or bad thing :)
19:43 jmills is there any "production" build or recommendation for the most stable, other than just the 3 releases out?
19:45 l0uis not that i know of
19:46 theron_ joined #gluster
19:49 jmills @l0uis just curious about your plans for upgrading, you think you'll go from 3.5 to 3.7?
19:49 jmills help @
19:49 l0uis jmills: i will probably go to 3.6, but i haven't really thought hard about it.
19:51 jiffin joined #gluster
19:51 jmills l0uis: thanks
19:58 Vortac joined #gluster
19:59 nsoffer joined #gluster
20:10 DV joined #gluster
20:11 cyberswat joined #gluster
20:17 jbautista- joined #gluster
20:23 jbautista- joined #gluster
20:37 mckaymatt joined #gluster
20:42 aaronott joined #gluster
21:00 B21956 joined #gluster
21:05 calavera joined #gluster
21:07 arthurh joined #gluster
21:07 calavera joined #gluster
21:44 calavera joined #gluster
22:05 aaronott joined #gluster
22:05 Pupeno_ joined #gluster
22:16 jmills left #gluster
22:26 Gill joined #gluster
22:29 Pupeno joined #gluster
23:00 calavera joined #gluster
23:21 Gill_ joined #gluster
23:24 TheCthulhu1 joined #gluster
23:24 plarsen joined #gluster
23:28 PatNarcisoZzZ joined #gluster
23:29 davidbitton joined #gluster
23:33 Gill__ joined #gluster
23:39 MugginsM joined #gluster
