
IRC log for #gluster, 2015-04-03


All times shown according to UTC.

Time Nick Message
00:02 JoeJulian It is applied to the cluster, yes.
00:02 lyaunzbe Okay, cool, thank you for the clarification!
00:03 lyaunzbe Do people put their bricks on the same server? That seems like it defeats the entire purpose of fault tolerance
00:04 JoeJulian When you have multiple devices on a server, it sometimes fits the use case to replicate each drive between servers, thus having multiple bricks per server.
00:04 JoeJulian Or, perhaps, multiple volumes.
00:05 JoeJulian Or both
00:05 JoeJulian :)
00:06 atrius joined #gluster
00:13 DV joined #gluster
00:32 joseki joined #gluster
00:34 tg2 yep, one brick per disk is not uncommon either
00:34 tg2 you just have to make sure you track your bricks properly for replication
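In volume-create terms, the layout tg2 and JoeJulian describe comes down to brick ordering: with "replica 2", gluster groups consecutive bricks on the command line into replica sets, so alternating servers pairs each disk across machines. A minimal sketch, assuming hypothetical hostnames server1/server2, volume name myvol, and one brick per disk; it only prints the command, since running it needs a live pool:

```shell
# Sketch only: builds and prints the volume-create command you would run
# on a live pool. Hostnames, volume name, and brick paths are hypothetical.
# With "replica 2", consecutive bricks form a replica set, so alternating
# servers replicates each disk between machines instead of within one.
CMD="gluster volume create myvol replica 2"
for disk in brick1 brick2; do
    CMD="$CMD server1:/data/$disk server2:/data/$disk"
done
echo "$CMD"
```

If both bricks of a pair ended up on the same server (e.g. listing all of server1's bricks first), losing that one server would take a whole replica set down, which is the tracking mistake tg2 warns about.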
00:38 joseki joined #gluster
00:39 joseki i just added a second nic and want to start using it. It has a different hostname and IP. there seems some stuff around dual-homed configurations, but there were some people that seemed to have trouble moving over. so looking for the most up-to-date info
00:40 DV joined #gluster
00:42 purpleidea joined #gluster
00:49 T3 joined #gluster
00:51 purpleidea joined #gluster
00:51 purpleidea joined #gluster
00:54 Pupeno_ joined #gluster
01:02 ndevos joined #gluster
01:02 ndevos joined #gluster
01:03 brcc joined #gluster
01:04 brcc Hello! I am trying glusterfs and I am just getting 30mb/s write speed (just one local brick). Tried it with ZFS and ext4 (just to make sure it didn't have to do with the filesystem). The same local test (not over gluster) returns 100mb/s
01:04 brcc Could it be related to fuse? Should I mount it any other way?
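For a comparison like brcc's to mean anything, the same test has to run against both the FUSE mount and the brick's local filesystem. One common way is dd with conv=fdatasync, so the page cache does not inflate the reported rate. A minimal sketch; the file path here is a temp file and would be pointed at the gluster mount (and then the brick filesystem) for a real comparison:

```shell
# Minimal write-throughput check. Run the identical command once with the
# output file on the FUSE mount and once on the brick's local filesystem.
# conv=fdatasync forces data to stable storage before dd reports a rate,
# so cached writes don't make the local run look artificially fast.
TESTFILE="$(mktemp /tmp/ddtest.XXXXXX)"   # point at the gluster mount to test it
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
```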
01:05 foster joined #gluster
01:11 jmarley joined #gluster
01:14 osiekhan3 joined #gluster
01:16 Pupeno joined #gluster
01:41 DV joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:50 T3 joined #gluster
01:55 haomaiwa_ joined #gluster
02:05 elico joined #gluster
02:25 nangthang joined #gluster
02:29 JoeJulian joseki: if you used ,,(hostnames) it's simply a matter of ensuring the hostname resolves to the ip you want in each place.
02:29 glusterbot joseki: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
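The factoid's new-pool procedure, written out as commands. This sketch only prints the sequence rather than executing it (it needs a live pool); node1..node3 are hypothetical hostnames that must resolve to the addresses you want on each machine:

```shell
# Sketch: builds and prints the probe sequence from the factoid above.
# node1..node3 are hypothetical; each name must resolve to the intended
# IP (e.g. the second NIC's address) on every peer.
FIRST=node1
OTHERS="node2 node3"
PLAN=""
for peer in $OTHERS; do
    # probe all other servers by name from the first node
    PLAN="${PLAN}ssh $FIRST gluster peer probe $peer\n"
done
# then probe the first node by name from just one of the others,
# so the first node is also known by hostname rather than IP
PLAN="${PLAN}ssh node2 gluster peer probe $FIRST\n"
printf "$PLAN"
```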
02:29 JoeJulian brcc: Could be if you have a really slow cpu or ram.
02:51 T3 joined #gluster
02:56 davidbitton joined #gluster
03:15 brcc JoeJulian: E3-1230 with 32GB RAM
03:21 lyaunzbe joined #gluster
03:26 elico joined #gluster
03:34 elico joined #gluster
03:44 brcc Just moved to NFS and it is really fast
03:44 brcc great
03:45 joseki seems failure to mount at reboot on ubuntu is still not completely fixed. trying to add backupvolfile-server and see if that helps
03:45 joseki also changed /etc/init/mounting-glusterfs.conf
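The backupvolfile-server option joseki mentions is a glusterfs mount option that gives the client a second server to fetch the volume file from if the primary is unreachable at boot. A sketch of what the fstab entry might look like, written to a temp file here just to show the format; server names, volume name, and mount point are hypothetical:

```shell
# Sample /etc/fstab entry for a gluster mount (written to a temp file
# here, not the real fstab). server1/server2, myvol, and /mnt/gluster
# are hypothetical. _netdev delays the mount until networking is up;
# backupvolfile-server names a fallback volfile server for boot time.
FSTAB_SNIPPET="$(mktemp)"
echo "server1:/myvol /mnt/gluster glusterfs defaults,_netdev,backupvolfile-server=server2 0 0" > "$FSTAB_SNIPPET"
cat "$FSTAB_SNIPPET"
```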
03:49 elico1 joined #gluster
03:51 T3 joined #gluster
03:53 elico joined #gluster
03:58 Pupeno joined #gluster
03:58 Pupeno joined #gluster
04:23 DV joined #gluster
04:41 joseki joined #gluster
04:41 joseki yeah, i guess i'll have to resort to adding mounts in /etc/rc.local. thing just won't start
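The rc.local fallback joseki resorts to might look like the following. This writes the snippet to a temp file rather than the real /etc/rc.local; server, volume, and mount point are hypothetical, and the mountpoint guard keeps the script safe to re-run:

```shell
# What an /etc/rc.local gluster-mount fallback could look like (written
# to a temp file here for illustration). rc.local runs at the end of
# boot, when the network and glusterd are definitely up, which sidesteps
# the early-boot mount ordering problem on Ubuntu.
RC_SNIPPET="$(mktemp)"
cat > "$RC_SNIPPET" <<'EOF'
#!/bin/sh
# Retry the gluster mount late in boot; the guard makes this idempotent.
if ! mountpoint -q /mnt/gluster; then
    mount -t glusterfs server1:/myvol /mnt/gluster
fi
exit 0
EOF
cat "$RC_SNIPPET"
```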
04:44 gem joined #gluster
04:46 DV joined #gluster
04:49 lezo joined #gluster
04:50 fyxim joined #gluster
04:52 T3 joined #gluster
04:59 wsirc_823 joined #gluster
05:04 DV joined #gluster
05:13 hagarth joined #gluster
05:17 jermudgeon_ joined #gluster
05:24 vimal joined #gluster
05:35 DV joined #gluster
05:46 T3 joined #gluster
06:01 kovshenin joined #gluster
06:02 DV joined #gluster
06:20 vipulnayyar joined #gluster
06:21 DV joined #gluster
06:22 gnudna joined #gluster
06:27 gem joined #gluster
06:35 gnudna left #gluster
07:00 DV joined #gluster
07:01 T3 joined #gluster
07:17 soumya joined #gluster
07:18 nangthang joined #gluster
07:20 lyang0 joined #gluster
07:31 soumya joined #gluster
07:42 vimal joined #gluster
07:46 wsirc_1040 joined #gluster
07:47 wsirc_1040 does any know how to disable native nfs on only one peer/node?
07:55 deniszh joined #gluster
07:56 ndevos wsirc_1040: disabling is done per volume, not per peer... you can kill the 'glusterfs' process with pid /var/lib/glusterfs/nfs/run/nfs.pid (or something like that) if you need to
07:57 ndevos but why do you want to do that?
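ndevos's per-node suggestion as a sketch. The pid-file path is the one he quoted, and he himself hedges it ("or something like that"), so verify it on your install; note also that glusterd may start the NFS server again on volume changes, so this is a temporary measure:

```shell
# Per-node workaround sketch: stop only this peer's gluster NFS server
# by killing the process behind its pid file. The path below is copied
# from ndevos's message and may differ on your install -- check before
# relying on it.
PIDFILE=/var/lib/glusterfs/nfs/run/nfs.pid

stop_local_nfs() {
    if [ -f "$PIDFILE" ]; then
        kill "$(cat "$PIDFILE")"
    else
        echo "no NFS pid file at $PIDFILE"
    fi
}
stop_local_nfs
```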
07:59 o5k joined #gluster
08:01 T3 joined #gluster
08:01 wsirc_1040 trying to get around the esxi, stripe nfs bug
08:02 ndevos did you file a bug for that already?
08:02 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
08:02 wsirc_1040 yes
08:02 wsirc_1040 https://bugzilla.redhat.com/show_bug.cgi?id=1208384
08:02 glusterbot Bug 1208384: high, unspecified, ---, rhs-bugs, NEW , NFS interoperability problem: Gluster Striped-Replicated can't read on vmware esxi 5.x NFS client
08:03 wsirc_1040 has tcpdumps
08:03 ndevos oh, are you on Red Hat Gluster Storage, not on the community glusterfs packages?
08:05 ndevos could you attach the tcpdump .pcap files there? we would need to look into the nfs protocol with wireshark to see the differences between the success/failure cases
08:05 wsirc_1040 oops I filed it under the wrong product
08:05 fsimonce joined #gluster
08:05 wsirc_1040 I'll switch
08:14 Slashman joined #gluster
08:16 glusterbot News from newglusterbugs: [Bug 1208384] NFS interoperability problem: Gluster Striped-Replicated can't read on vmware esxi 5.x NFS client <https://bugzilla.redhat.com/show_bug.cgi?id=1208384>
08:16 glusterbot News from newglusterbugs: [Bug 1208784] Load md-cache on the server <https://bugzilla.redhat.com/show_bug.cgi?id=1208784>
08:34 ricky-ti1 joined #gluster
08:40 DV joined #gluster
08:41 wsirc_1040 Bug 1208384, uploaded pcaps
08:41 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1208384 high, unspecified, ---, rhs-bugs, NEW , NFS interoperability problem: Gluster Striped-Replicated can't read on vmware esxi 5.x NFS client
08:45 plarsen joined #gluster
09:02 T3 joined #gluster
09:08 plarsen joined #gluster
09:17 ndevos thanks wsirc_1040, I'll be travelling later today, and might be able to check the tcpdumps while in the plane - hopefully there is something obvious and we can fix it soon
09:27 soumya joined #gluster
09:42 hagarth joined #gluster
09:43 wsirc_1040 FYI you can use the rpcbind -w centos 7 bug to stop nfs from working on one peer
09:43 Pupeno joined #gluster
09:44 Pupeno_ joined #gluster
09:45 Pupeno_ joined #gluster
09:51 Pupeno joined #gluster
10:03 T3 joined #gluster
10:09 plarsen joined #gluster
10:12 Prilly joined #gluster
10:13 DV joined #gluster
10:16 glusterbot News from newglusterbugs: [Bug 1208819] mount ec volume through nfs, ls shows no file , but with specified filename, the file can be modified <https://bugzilla.redhat.com/show_bug.cgi?id=1208819>
10:17 DV joined #gluster
10:22 Pupeno joined #gluster
10:34 DV_ joined #gluster
10:35 T0aD joined #gluster
10:48 kovshenin joined #gluster
10:54 jiku joined #gluster
11:03 T3 joined #gluster
11:11 DV joined #gluster
11:25 LebedevRI joined #gluster
11:34 DV joined #gluster
11:55 jmarley joined #gluster
11:59 haomaiwa_ joined #gluster
12:00 siel joined #gluster
12:04 T3 joined #gluster
12:22 bene2 joined #gluster
12:24 and` joined #gluster
12:35 and` joined #gluster
12:37 lkoranda joined #gluster
12:38 and` joined #gluster
12:43 and` joined #gluster
12:46 wkf joined #gluster
12:47 glusterbot News from newglusterbugs: [Bug 1170942] More than redundancy bricks down, leads to the persistent write return IO error, then the whole file can not be read/write any longer, even all bricks going up <https://bugzilla.redhat.com/show_bug.cgi?id=1170942>
12:55 jmarley joined #gluster
12:56 purpleidea joined #gluster
12:56 purpleidea joined #gluster
13:00 and` joined #gluster
13:05 T3 joined #gluster
13:07 and` joined #gluster
13:07 vimal joined #gluster
13:12 T3 joined #gluster
13:21 hamiller joined #gluster
13:21 RicardoSSP joined #gluster
13:24 dgandhi joined #gluster
13:32 kovshenin joined #gluster
13:39 kbon-ntc joined #gluster
13:40 _Bryan_ joined #gluster
13:48 vipulnayyar joined #gluster
13:59 jmarley joined #gluster
14:09 getup joined #gluster
14:13 Gill joined #gluster
14:17 chirino joined #gluster
14:35 DV joined #gluster
14:35 lyang0 joined #gluster
14:42 lpabon joined #gluster
14:55 roost joined #gluster
14:58 DV joined #gluster
15:00 plarsen joined #gluster
15:02 ckotil I was messing around with some tunings today, and hit this. https://bugzilla.redhat.com/show_bug.cgi?id=1162910 are there any workarounds?
15:02 glusterbot Bug 1162910: medium, unspecified, ---, bugs, NEW , mount options no longer valid: noexec, nosuid, noatime
15:02 and` joined #gluster
15:05 ckotil I was able to remount with -o acl
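The workaround ckotil found: bug 1162910 makes gluster reject certain options at initial mount time, but a subsequent remount can apply them; per the log, "-o acl" worked this way for him. A sketch that only prints the commands (it needs a real gluster mount); mount point, server, and volume are hypothetical:

```shell
# Sketch (printed, not executed). Mount first without the problematic
# options, then apply them via remount. ckotil reports "-o acl" taking
# effect on remount; names below are hypothetical.
MNT=/mnt/gluster
echo "mount -t glusterfs server1:/myvol $MNT"
echo "mount -o remount,acl $MNT"
```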
15:07 deniszh joined #gluster
15:14 Gill left #gluster
15:15 virusuy joined #gluster
15:18 gem joined #gluster
15:22 foster joined #gluster
15:31 CyrilPeponnet joined #gluster
15:37 nangthang joined #gluster
15:39 kotreshhr joined #gluster
15:39 Gill joined #gluster
15:51 DV joined #gluster
16:06 kovshenin joined #gluster
16:10 atrius joined #gluster
16:10 vipulnayyar joined #gluster
16:18 mat1010 joined #gluster
16:25 rotbeard joined #gluster
16:29 mat1010 joined #gluster
16:35 plarsen joined #gluster
16:36 soumya joined #gluster
16:43 Prilly joined #gluster
16:43 kdhananjay joined #gluster
16:44 deniszh joined #gluster
16:48 lyaunzbe joined #gluster
17:05 kotreshhr left #gluster
17:06 cicero joined #gluster
17:07 cicero is there any chance we could still get glusterfs-3.3 PPAs?
17:07 cicero looks like the oldest is 3.4
17:12 Rapture joined #gluster
17:14 vipulnayyar joined #gluster
17:16 balacafalata-bil joined #gluster
17:26 JoeJulian semiosis: ^
17:46 Gill joined #gluster
17:49 drue joined #gluster
17:50 drue on gluster 3.6 on centos 6, i have lots of eventd logs that are rotating, but not getting removed.. where's the rotation configured?
18:05 JoeJulian eventd logs? Or was that just a typo and you're referring to the logs in /var/log/glusterfs (aka gluster logs or +/bricks: brick logs)?
18:06 JoeJulian Assuming the latter, logrotate handles that. the configs are in /etc/logrotate.d/
18:09 drue ah, i'm confused.. the logs are from an application and on my gluster file system.. not gluster logs proper
18:09 drue sorry for the confusion
18:14 JoeJulian Gluster does nothing to manage that. That's all application specific.
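As JoeJulian notes, application logs on a gluster volume need their own logrotate config, just like gluster's own configs in /etc/logrotate.d/ handle the logs under /var/log/glusterfs. A sketch of such a config, written to a temp file here for illustration; the log path and retention values are hypothetical:

```shell
# Sample logrotate config for application logs living on a gluster
# mount (written to a temp file; on a real system it would go in
# /etc/logrotate.d/). Path and retention are hypothetical.
LR_SNIPPET="$(mktemp)"
cat > "$LR_SNIPPET" <<'EOF'
/mnt/gluster/app/logs/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
EOF
cat "$LR_SNIPPET"
```

"rotate 4" is what actually removes old files, which was the missing piece in drue's case: rotation without a retention count just accumulates rotated logs.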
18:19 Gill joined #gluster
18:20 gem joined #gluster
19:34 Pupeno joined #gluster
20:16 DV joined #gluster
20:24 Pupeno joined #gluster
20:24 Pupeno joined #gluster
20:48 plarsen joined #gluster
20:54 roost joined #gluster
20:58 sonic393 joined #gluster
20:59 sonicx joined #gluster
21:00 _Bryan_ joined #gluster
21:03 92AAAY8HU joined #gluster
21:05 Rapture joined #gluster
21:06 92AAAY8IT joined #gluster
21:09 jackdpeterson joined #gluster
21:30 o5k_ joined #gluster
21:39 ipmango joined #gluster
21:41 purpleidea joined #gluster
21:42 Prilly joined #gluster
22:12 o5k joined #gluster
22:23 lyaunzbe joined #gluster
22:26 T3 joined #gluster
22:30 DV joined #gluster
22:33 DV joined #gluster
22:37 wkf joined #gluster
22:45 atrius joined #gluster
22:55 Peanut joined #gluster
23:05 Kins joined #gluster
23:15 Kins joined #gluster
23:17 T3 joined #gluster
23:19 Kins joined #gluster
23:26 Kins joined #gluster
23:31 Kins joined #gluster
23:50 Peanut joined #gluster
