
IRC log for #gluster, 2014-12-14


All times shown according to UTC.

Time Nick Message
00:00 calum_ joined #gluster
00:39 calisto joined #gluster
00:39 Pupeno joined #gluster
00:40 TrDS left #gluster
01:29 calisto joined #gluster
01:33 theron joined #gluster
02:13 calisto1 joined #gluster
03:11 bala joined #gluster
03:22 theron joined #gluster
03:31 Pupeno joined #gluster
03:36 zerick joined #gluster
03:47 bala1 joined #gluster
03:48 n-st joined #gluster
03:49 eclectic joined #gluster
03:54 RobertLaptop joined #gluster
03:54 codex joined #gluster
03:55 codex joined #gluster
03:57 al joined #gluster
04:02 lyang0 joined #gluster
04:09 shubhendu joined #gluster
04:29 bala joined #gluster
04:39 elico joined #gluster
05:21 ur_ joined #gluster
05:23 Lee_ joined #gluster
05:24 johnmark_ joined #gluster
05:25 feeshon_ joined #gluster
05:25 atrius_ joined #gluster
05:25 saltsa_ joined #gluster
05:27 lanning_ joined #gluster
05:28 atoponce joined #gluster
05:31 Pupeno joined #gluster
05:33 hflai joined #gluster
05:36 georgeh joined #gluster
05:39 partner_ joined #gluster
05:39 lyang0 joined #gluster
05:39 m0zes joined #gluster
05:39 tessier_ joined #gluster
05:39 edong23_ joined #gluster
05:39 yoavz joined #gluster
05:39 mikedep333 joined #gluster
05:39 stigchri1tian joined #gluster
05:39 stickyboy joined #gluster
05:39 DJClean joined #gluster
05:39 ryao joined #gluster
05:39 aulait joined #gluster
05:39 [o__o] joined #gluster
05:39 UnwashedMeme joined #gluster
05:39 churnd joined #gluster
05:39 Guest75764 joined #gluster
05:39 rastar_afk joined #gluster
05:39 eryc_ joined #gluster
05:39 nixpanic_ joined #gluster
05:39 CP|AFK joined #gluster
05:39 NuxRo joined #gluster
05:39 atrius` joined #gluster
05:39 vincent_vdk joined #gluster
05:47 ryao joined #gluster
05:48 wgao joined #gluster
05:48 bala joined #gluster
05:48 al joined #gluster
05:48 dgandhi joined #gluster
05:48 cfeller_ joined #gluster
05:48 sadbox joined #gluster
05:48 Telsin joined #gluster
05:48 xrsa joined #gluster
05:48 JordanHackworth joined #gluster
05:48 kalzz joined #gluster
05:48 johndescs joined #gluster
05:48 verdurin joined #gluster
05:48 weykent joined #gluster
05:48 ws2k3 joined #gluster
05:48 johnnytran joined #gluster
05:48 ultrabizweb joined #gluster
05:48 frankS2 joined #gluster
05:48 Bosse joined #gluster
05:48 glusterbot joined #gluster
05:48 masterzen joined #gluster
05:48 tomased joined #gluster
05:49 ws2k3 joined #gluster
05:53 primusinterpares joined #gluster
06:00 primusinterpares joined #gluster
06:00 wgao joined #gluster
06:00 bala joined #gluster
06:00 al joined #gluster
06:00 dgandhi joined #gluster
06:00 cfeller_ joined #gluster
06:00 sadbox joined #gluster
06:00 Telsin joined #gluster
06:00 xrsa joined #gluster
06:00 JordanHackworth joined #gluster
06:00 kalzz joined #gluster
06:00 johndescs joined #gluster
06:00 verdurin joined #gluster
06:00 weykent joined #gluster
06:00 johnnytran joined #gluster
06:00 ultrabizweb joined #gluster
06:00 frankS2 joined #gluster
06:00 Bosse joined #gluster
06:00 glusterbot joined #gluster
06:00 masterzen joined #gluster
06:00 tomased joined #gluster
06:48 primusinterpares joined #gluster
06:51 marcoceppi joined #gluster
06:59 theron joined #gluster
06:59 ctria joined #gluster
07:03 vimal joined #gluster
07:19 zerick joined #gluster
07:24 glusterbot News from newglusterbugs: [Bug 1170075] [RFE] : BitRot detection in glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1170075>
07:28 M28 If I set up nginx to serve some files for a domain directly from a gluster mount point, should it just work?
07:28 M28 Is it gonna cache files accessed frequently and all that stuff automatically?
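
For later readers: serving straight from the FUSE mount generally does work, but nginx itself does not cache static file contents; only the OS page cache and nginx's open_file_cache (descriptors and stat() results, not data) help repeat reads. A minimal sketch, assuming the volume is mounted at /var/www/site-images and using a hypothetical domain:

    server {
        listen 80;
        server_name files.example.com;      # hypothetical domain
        root /var/www/site-images;          # gluster FUSE mount point

        location / {
            try_files $uri =404;
            sendfile on;
            # caches file descriptors and metadata only, not file contents
            open_file_cache        max=10000 inactive=60s;
            open_file_cache_valid  30s;
            open_file_cache_errors on;
        }
    }
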
07:33 aulait joined #gluster
07:53 Intensity joined #gluster
07:59 M28_ joined #gluster
08:21 LebedevRI joined #gluster
08:22 zerick joined #gluster
08:48 theron joined #gluster
09:28 TrDS joined #gluster
09:50 elico joined #gluster
09:59 fandi joined #gluster
10:05 M28_ joined #gluster
10:09 gildub joined #gluster
10:25 ghenry joined #gluster
10:30 kovshenin joined #gluster
10:37 theron joined #gluster
10:37 gildub_ joined #gluster
10:48 sac_ joined #gluster
11:17 Pupeno joined #gluster
11:48 Pupeno joined #gluster
11:49 vimal joined #gluster
11:57 y4m4_ joined #gluster
11:58 Pupeno joined #gluster
11:59 y4m4 joined #gluster
12:12 bala joined #gluster
12:14 M28 is 10 ms an acceptable latency for gluster? I want to host nodes in 2 datacenters and that's the latency between them...
12:14 tetreis joined #gluster
12:15 y4m4 joined #gluster
12:15 y4m4 joined #gluster
12:20 Micromus M28: I would think so
12:21 Micromus I don't have experience with gluster, but typically synchronous replication for SANs requires sub 50-100ms latency
12:21 M28 yeah just wanna make sure that it doesn't expect sub-ms latencies
12:21 Micromus If i recall correctly...
12:23 Micromus http://arstechnica.com/civis/viewtopic.php?f=21&amp;t=1192993
12:23 Micromus I seem to be wrong
12:24 Micromus ofc, if you do async replication, problem solved
12:26 M28 uh
12:27 M28 async replication = geo-replication?
12:28 Micromus I think so
12:31 M28 I'm serving large files, so even a 50 ms delay for the initial connection doesn't really matter to me
12:31 M28 can I still have those nodes split? :s
12:31 M28 I guess there's really only one way to find out...
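
For reference, the two modes being discussed, sketched with invented names (volume "bigfiles", hosts dc1-node and dc2-node). Synchronous replication makes every write wait on both sites, so the 10 ms shows up on each write; geo-replication ships changes asynchronously to a separate slave volume:

    # synchronous: one replica pair spanning the two datacenters
    gluster volume create bigfiles replica 2 dc1-node:/bricks/b1 dc2-node:/bricks/b1
    gluster volume start bigfiles

    # asynchronous geo-replication: the slave volume must already exist on dc2
    gluster volume geo-replication bigfiles dc2-node::bigfiles-slave create push-pem
    gluster volume geo-replication bigfiles dc2-node::bigfiles-slave start
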
12:31 diegows joined #gluster
12:55 glusterbot News from newglusterbugs: [Bug 1173953] typos <https://bugzilla.redhat.com/show_bug.cgi?id=1173953>
13:03 soumya joined #gluster
13:13 Pupeno joined #gluster
13:16 T3 joined #gluster
13:39 rolfb joined #gluster
13:56 theron joined #gluster
14:01 pcaruana joined #gluster
14:08 juhaj The fuse client reads a distributed volume in parallel, does it not? I.e. bandwidth = #bricks*(bw of slowest brick). How does NFS on glusterfs behave in this respect?
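
(For comparison, with hostnames as placeholders: the native client opens a connection to every brick and fans requests out itself, so aggregate throughput over many files can scale with brick count; mounting gluster's built-in NFS server instead funnels everything through the one server you mounted, which then talks to the bricks on your behalf.)

    # native FUSE client: connects to all bricks directly
    mount -t glusterfs server1:/distvol /mnt/distvol

    # gluster NFS (NFSv3): all traffic goes via server1's NFS translator
    mount -t nfs -o vers=3,tcp server1:/distvol /mnt/distvol
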
14:14 rotbeard joined #gluster
14:32 strata joined #gluster
14:32 strata diagnostics.brick-log-level           ( DEBUG|INFO|WARNING|ERROR|CRITICAL|NONE|TRACE )
14:32 strata diagnostics.client-log-level          (DEBUG|INFO|WARNING|ERROR|CRITICAL|NONE|TRACE )
14:33 strata ^ will these options make the 12GB of logs i have accumulated over 4 days go away? :)
14:33 M28 12 GB in 4 days? wow
14:34 strata i am new to gluster but it is awesome. but my root partition is only 20GB and i'm going to run out of space if gluster logs this much stuff.
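
In case it helps later readers: the options strata pasted are set per volume, and pairing them with log rotation keeps the files bounded. A sketch with a made-up volume name (the packages usually ship a logrotate policy already; this just tightens it):

    gluster volume set myvol diagnostics.brick-log-level WARNING
    gluster volume set myvol diagnostics.client-log-level WARNING

    # /etc/logrotate.d/glusterfs-tight (rough sketch)
    /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log {
        daily
        rotate 7
        compress
        copytruncate
        missingok
        notifempty
    }
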
14:56 M28 joined #gluster
15:00 M28 joined #gluster
15:08 social joined #gluster
15:12 theron joined #gluster
15:22 hagarth joined #gluster
15:30 lyang0 joined #gluster
15:51 feeshon joined #gluster
16:07 feeshon joined #gluster
16:09 rjoseph joined #gluster
16:12 _Bryan_ joined #gluster
17:01 vimal joined #gluster
17:09 n-st joined #gluster
17:17 T3 joined #gluster
17:19 M28_ joined #gluster
17:32 loki_ joined #gluster
17:32 loki_ Hi all
17:33 loki_ new to gluster .. i'm not sure what the layout of my disks should be: should i build a raid array and set up gluster on top of it, or just set gluster up over all disks without raid (JBOD) .. ?
17:44 mrEriksson That pretty much depends on your requirements and if you are planning to use gluster replication etc.
17:44 loki_ Well, we're going to buy a 4U storage server soon
17:45 loki_ Starting with less drives and adding more as we scale
17:45 mrEriksson I use RAID-5 on rather large drives (3TB+) which is rather unsafe due to the rebuild-time if a drive fails. But since I use replication in gluster, I feel that it is pretty safe anyways, since each brick is replicated to two different raidsets on different hosts
17:46 loki_ The thing is we're not going to have multiple nodes of storage, for now only that 4U server
17:46 loki_ So my plan was to setup RAID-10 arrays, backed by LSI CacheCade with SSDs
17:46 loki_ storage needs to be accessible via iSCSI
17:47 loki_ And as i will bring more RAID-10 arrays to the san .. i was thinking using glusterfs on a single node could help me scale easily as I add more storage arrays ..
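
In gluster terms that single-node plan could look like the sketch below: each RAID-10 array becomes one brick of a plain distributed volume, and growing it later is an add-brick plus rebalance. Names and paths are invented:

    # one brick per RAID-10 array, all on the same server
    gluster volume create sanvol storage1:/bricks/raid10-a/brick storage1:/bricks/raid10-b/brick
    gluster volume start sanvol

    # later, when a new array is added
    gluster volume add-brick sanvol storage1:/bricks/raid10-c/brick
    gluster volume rebalance sanvol start
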
17:47 mrEriksson Then I would not go for JBOD
17:48 mrEriksson Perhaps you could replicate bricks over different drives in the same host, but then I think you would be better off letting the hardware handle this
17:48 mrEriksson In what way do you need storage to be accessible via iSCSI? Gluster doesn't do iSCSI
17:48 loki_ http://www.gluster.org/community/documentation/index.php/GlusterFS_iSCSI ?
17:49 mrEriksson (Bricks can of course be on iSCSI targets, but gluster won't export to iSCSI. Not that I know of anyways)
17:49 mrEriksson Oh, sweet, hadn't seen that one
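
Roughly what that page describes, sketched with invented names: a file on the gluster mount becomes the LUN's backing store and tgt (scsi-target-utils) exports it; newer tgt builds can also talk to the volume directly via libgfapi instead of going through the mount.

    # backing file on the mounted gluster volume
    truncate -s 500G /mnt/sanvol/lun0.img

    # export it as an iSCSI target with tgt
    tgtadm --lld iscsi --op new --mode target --tid 1 \
           --targetname iqn.2014-12.com.example:sanvol-lun0
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
           --backing-store /mnt/sanvol/lun0.img
    tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
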
17:51 loki_ So i would set up my raid arrays as usual, and make them available via Ceph with no replication
17:51 mrEriksson Ceph?
17:51 loki_ heh, gluster :)
17:51 mrEriksson :P
17:52 mrEriksson The iSCSI addon seems pretty sweet, but I'm not sure how well it will scale
17:52 mrEriksson The awesome thing about gluster is that clients can talk to multiple hosts if bricks are distributed/replicated, which increases availability and performance
17:54 loki_ Well i'm just trying to think long-term here ..
17:55 loki_ For now, i could just add iSCSI targets over that same NAS .. but that will be complex to scale, even on the same server, and will eventually be a problem the day we get a second 4U node
17:56 mrEriksson So how come you need iSCSI? VMWare or such?
17:58 loki_ Using SolusVM, they do not support centralized storage (except for iSCSI)
17:58 mrEriksson Ah, I see
17:59 mrEriksson We use OpenNebula, works like a charm with Gluster
18:00 loki_ Trying to stay far from vmware :P
18:00 mrEriksson OpenNebula isn't vmware :)
18:00 mrEriksson We don't use vmware either
18:01 loki_ ok supports KVM
18:02 mrEriksson Among others
18:02 mrEriksson I've used it with XEN and KVM
18:02 loki_ Do you know if it works in a setup where the server only has one NIC?
18:02 mrEriksson But there is also support for Hyper-V and VMWare
18:02 mrEriksson On the hosts?
18:03 loki_ Yes.. we have access to only one NIC per phys. server via our virtual switch.. and this is the main reason we don't use openstack
18:03 mrEriksson There are many other reasons to not use openstack too :-)
18:03 mrEriksson But yes
18:04 loki_ Looking for alternatives to SolusVM, considering oVirt but it does not seem the best for multi-tenancy
18:04 loki_ mrEriksson just curious, why did you choose OpenNebula over OpenStack?
18:05 mrEriksson We bundle all nic:s in each host using bonding and add vlan:s on top of that for different services, guest networks, gluster etc, and let the switches manage and prioritize the traffic based on the vlans
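
For readers wondering what that host-side setup looks like, a Debian-style fragment with placeholder interface names, VLAN ID and address; the switches do the prioritisation:

    # /etc/network/interfaces (fragment)
    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100

    # VLAN 20: storage/gluster traffic
    auto bond0.20
    iface bond0.20 inet static
        address 10.0.20.11
        netmask 255.255.255.0
        vlan-raw-device bond0
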
18:06 loki_ ok so it does play well with one nic without creating tons of useless bridges tunnelled over GRE (openstack here)..
18:06 mrEriksson I was contracted to do a comparison between OpenStack and OpenNebula (200 hours or so) and OpenNebula was the stronger of the two at that time. After that, we moved to OpenNebula in our environment too
18:06 mrEriksson No, No GRE's
18:07 mrEriksson Though, depending on the network model you choose, there will be bridges for guest access on the hosts
18:07 loki_ our NAS is planned to have 40Gbps, and our nodes have 10Gbps bandwidth.. really, that traffic isolated through VLANs should just be fast..
18:09 mrEriksson Sounds sweet
18:09 loki_ Was hoping solusvm would benefit from some updates, after their acquisition by onapp
18:10 mrEriksson We use 4x1Gbps in each host for now, was planning to move to 10Gbps with time, but haven't seen any need for it so far
18:11 loki_ But it's still the same as 1 yr ago, it works but it's getting old..
18:11 loki_ You are a cloud provider ? or using nebula for private biz. cloud ?
18:12 mrEriksson Ok, OpenNebula is punching out new releases with new features four times per year, which is pretty sweet. (If you are able to keep up with that :-))
18:13 mrEriksson We use it both for our cloud environment and as the management interface in private setups for customers
18:14 T3 I have a 2-node volume replication setup that was working fine. Now I'm introducing firewalls on the way. I create a rule for every required port, allowing access from both localhost (127.0.0.1) and the other node IP. Now let me explain problem #1. Each server is a glusterfs brick provider, and I mount the "client" on the same server. So Server 1 mounts glusterfs on Server 1, and Server 2 on Server 2. After firewalls were introduced, Server 1 is
18:14 T3 not mounting after reboots. I use a line on rc.local to do that:
18:14 T3 # Mount GlusterFS cluster partition
18:14 T3 mount -t glusterfs -o log-level=INFO,log-file=/var/log/gluster.log server1.something.com:/site-images /var/www/site-images
18:14 T3 this is from server1 ^^
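
(For reference, the ports a gluster 3.4+ pair typically needs open to each other: 24007-24008 for glusterd plus one port per brick counting up from 49152, and portmapper with 38465-38467 only if the built-in NFS server is used. A rough iptables sketch, not T3's actual rules; the peer address is a placeholder.)

    OTHER_NODE=203.0.113.2            # placeholder: the other node's IP
    iptables -A INPUT -i lo -j ACCEPT                                         # local client traffic
    iptables -A INPUT -p tcp -s "$OTHER_NODE" --dport 24007:24008 -j ACCEPT   # glusterd management
    iptables -A INPUT -p tcp -s "$OTHER_NODE" --dport 49152:49160 -j ACCEPT   # brick processes
    # only needed if clients mount via the gluster NFS server
    iptables -A INPUT -p tcp -s "$OTHER_NODE" --dport 111 -j ACCEPT
    iptables -A INPUT -p udp -s "$OTHER_NODE" --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp -s "$OTHER_NODE" --dport 38465:38467 -j ACCEPT
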
18:15 loki_ mrEriksson well i'd prefer not having time to review all the new features over having to ask support for new ones .. heh.
18:15 T3 any ideas?
18:15 T3 I have checked the firewall rules N times. It works fine on server2
18:15 mrEriksson T3: Can you access the brick port via localhost from server1?
18:16 mrEriksson loki_: That's what the commercial support is for :)
18:16 kovshenin joined #gluster
18:18 T3 # telnet images1.mhmfun.com 49152
18:18 T3 Trying 127.0.1.1...
18:18 T3 Connected to web3.
18:19 T3 seems so.. although I don't understand the 0.1.1
18:19 T3 mrEriksson, ^^
18:19 mrEriksson Check your hsots-file
18:19 mrEriksson hosts
18:20 T3 yeah, fat finger :|
18:20 chirino joined #gluster
18:21 T3 mrEriksson, ok, same result, now with the correct localhost ip
18:22 mrEriksson So are you able to telnet to the port using localhost?
18:23 T3 yes
18:23 T3 well.. just tested one. let me test all the others
18:26 systemonkey joined #gluster
18:29 zerick joined #gluster
18:33 theron joined #gluster
18:39 mrEriksson I'm off for today, good luck guys!
18:39 T3 mrEriksson, it mounted now. I guess the problem was with the typo on /etc/hosts
18:39 T3 mrEriksson, thank you!
18:39 mrEriksson T3: Probably, seen that a couple of times too
18:39 T3 have a good resting
18:39 mrEriksson Laters!
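
A side note on the rc.local mount from earlier: the same thing can live in /etc/fstab, where _netdev tells the init scripts to wait for networking before mounting (values copied from T3's mount line):

    # /etc/fstab
    server1.something.com:/site-images  /var/www/site-images  glusterfs  defaults,_netdev,log-level=INFO,log-file=/var/log/gluster.log  0 0
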
18:50 Dw_Sn joined #gluster
18:51 elico joined #gluster
18:51 T3 When I run # gluster volume status, the Self-heal Daemon lines always come with Port N/A, but they are marked as online (Y), like this:
18:51 T3 Self-heal Daemon on localhost    N/A
18:51 T3 Self-heal Daemon on localhost    N/A    Y    1653
18:52 T3 anybody know why is that?
18:52 T3 I'm starting to think this is about not "actually running" Self-heal at this point in time.. so no port is in use
18:52 T3 but this is just a rough guess
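
For completeness: the self-heal daemon acts as a client of the bricks and doesn't listen on a TCP port of its own, so Port N/A with Online Y is the expected output rather than a sign that healing is switched off. What it is actually working on can be checked with the heal sub-commands (volume name taken from the earlier mount line):

    gluster volume heal site-images info
    gluster volume heal site-images info split-brain
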
18:56 kovshenin joined #gluster
19:05 feeshon joined #gluster
19:19 zerick joined #gluster
19:30 kovshenin joined #gluster
19:31 kovshenin joined #gluster
19:56 glusterbot News from newglusterbugs: [Bug 993433] Volume quota report is human readable only, not machine readable <https://bugzilla.redhat.com/show_bug.cgi?id=993433>
19:56 glusterbot News from newglusterbugs: [Bug 1031817] Setting a quota for the root of a volume changes the reported volume size <https://bugzilla.redhat.com/show_bug.cgi?id=1031817>
19:56 glusterbot News from newglusterbugs: [Bug 1070685] glusterfs ipv6 functionality not working <https://bugzilla.redhat.com/show_bug.cgi?id=1070685>
19:56 glusterbot News from resolvedglusterbugs: [Bug 831699] Handle multiple networks better <https://bugzilla.redhat.com/show_bug.cgi?id=831699>
19:56 glusterbot News from resolvedglusterbugs: [Bug 885424] File operations occur as root regardless of original user on 32-bit nfs client <https://bugzilla.redhat.com/show_bug.cgi?id=885424>
19:56 glusterbot News from resolvedglusterbugs: [Bug 907540] Gluster fails to start many volumes <https://bugzilla.redhat.com/show_bug.cgi?id=907540>
19:56 glusterbot News from resolvedglusterbugs: [Bug 911361] Bricks grow when other bricks heal <https://bugzilla.redhat.com/show_bug.cgi?id=911361>
19:56 glusterbot News from resolvedglusterbugs: [Bug 956247] Quota enforcement unreliable <https://bugzilla.redhat.com/show_bug.cgi?id=956247>
19:56 glusterbot News from resolvedglusterbugs: [Bug 960141] NFS no longer responds, get  "Reply submission failed" errors <https://bugzilla.redhat.com/show_bug.cgi?id=960141>
19:56 glusterbot News from resolvedglusterbugs: [Bug 960867] failover doesn't work when a hdd part of hardware raid massive becomes broken <https://bugzilla.redhat.com/show_bug.cgi?id=960867>
19:56 glusterbot News from resolvedglusterbugs: [Bug 961197] glusterd fails to read from the nfs socket every 3 seconds if all volumes are set nfs.disable <https://bugzilla.redhat.com/show_bug.cgi?id=961197>
19:56 glusterbot News from resolvedglusterbugs: [Bug 961506] getfattr can hang when trying to get an attribute that doesn't exist <https://bugzilla.redhat.com/show_bug.cgi?id=961506>
19:56 glusterbot News from resolvedglusterbugs: [Bug 971528] Gluster fuse mount corrupted <https://bugzilla.redhat.com/show_bug.cgi?id=971528>
19:56 glusterbot News from resolvedglusterbugs: [Bug 974886] timestamps of brick1 and brick2 is not the same. <https://bugzilla.redhat.com/show_bug.cgi?id=974886>
19:56 glusterbot News from resolvedglusterbugs: [Bug 978297] Glusterfs self-heal daemon crash on split-brain replicate log too big <https://bugzilla.redhat.com/show_bug.cgi?id=978297>
19:56 glusterbot News from resolvedglusterbugs: [Bug 983676] 2.6.39-400.109.1.el6uek.x86_64 doesn't work with GlusterFS 3.3.1 <https://bugzilla.redhat.com/show_bug.cgi?id=983676>
19:56 glusterbot News from resolvedglusterbugs: [Bug 990220] Group permission with high GID Number (200090480) is not being honored by Gluster <https://bugzilla.redhat.com/show_bug.cgi?id=990220>
19:56 glusterbot News from resolvedglusterbugs: [Bug 997889] VM filesystem read-only <https://bugzilla.redhat.com/show_bug.cgi?id=997889>
19:56 glusterbot News from resolvedglusterbugs: [Bug 1005616] glusterfs client crash (signal received: 6) <https://bugzilla.redhat.com/show_bug.cgi?id=1005616>
19:56 glusterbot News from resolvedglusterbugs: [Bug 1005860] GlusterFS: Can't add a third brick to a volume - "Number of Bricks" is messed up <https://bugzilla.redhat.com/show_bug.cgi?id=1005860>
19:56 glusterbot News from resolvedglusterbugs: [Bug 1005862] GlusterFS: Can't add a new peer to the cluster - "Number of Bricks" is messed up <https://bugzilla.redhat.com/show_bug.cgi?id=1005862>
19:56 glusterbot News from resolvedglusterbugs: [Bug 1007346] gluster 3.4 write <https://bugzilla.redhat.com/show_bug.cgi?id=1007346>
19:56 glusterbot News from resolvedglusterbugs: [Bug 1016482] Owner of some directories become root <https://bugzilla.redhat.com/show_bug.cgi?id=1016482>
19:56 glusterbot News from resolvedglusterbugs: [Bug 1023309] geo-replication command failed <https://bugzilla.redhat.com/show_bug.cgi?id=1023309>
19:56 glusterbot News from resolvedglusterbugs: [Bug 1023636] Inconsistent UUID's not causing an error that would stop the system <https://bugzilla.redhat.com/show_bug.cgi?id=1023636>
19:56 glusterbot News from resolvedglusterbugs: [Bug 1024181] Unicode filenames cause directory listing interactions to hang/loop <https://bugzilla.redhat.com/show_bug.cgi?id=1024181>
19:56 glusterbot News from resolvedglusterbugs: [Bug 1040862] volume status detail command cause fd leak <https://bugzilla.redhat.com/show_bug.cgi?id=1040862>
19:56 glusterbot News from resolvedglusterbugs: [Bug 1045426] geo-replication failed with: (xtime) failed on peer with OSError, when use non-privileged user <https://bugzilla.redhat.com/show_bug.cgi?id=1045426>
19:56 glusterbot News from resolvedglusterbugs: [Bug 1095179] Gluster volume inaccessible on all bricks after a glusterfsd segfault on one brick <https://bugzilla.redhat.com/show_bug.cgi?id=1095179>
19:56 glusterbot News from resolvedglusterbugs: [Bug 1147107] Cannot set distribute.migrate-data xattr on a file <https://bugzilla.redhat.com/show_bug.cgi?id=1147107>
19:56 glusterbot News from resolvedglusterbugs: [Bug 822675] Errors using GlusterFS files through bind mount <https://bugzilla.redhat.com/show_bug.cgi?id=822675>
19:56 glusterbot News from resolvedglusterbugs: [Bug 831677] Add dict_set_transient_str <https://bugzilla.redhat.com/show_bug.cgi?id=831677>
19:56 glusterbot News from resolvedglusterbugs: [Bug 970224] Under heavy load, Grid Engine array jobs fail; write permission <https://bugzilla.redhat.com/show_bug.cgi?id=970224>
19:56 glusterbot News from resolvedglusterbugs: [Bug 884325] Enforce RPM dependencies for new swift removal refactoring <https://bugzilla.redhat.com/show_bug.cgi?id=884325>
19:56 glusterbot News from resolvedglusterbugs: [Bug 810046] GlusterFS replica mount hangs forever on "toxic" file healing. <https://bugzilla.redhat.com/show_bug.cgi?id=810046>
19:56 glusterbot News from resolvedglusterbugs: [Bug 828039] ping_pong fails on fuse/nfs mount when new bricks are added to distribute volume <https://bugzilla.redhat.com/show_bug.cgi?id=828039>
19:56 glusterbot News from resolvedglusterbugs: [Bug 861297] [enhancement]: Allow self-heal to balance sources in replica sets greater than two <https://bugzilla.redhat.com/show_bug.cgi?id=861297>
19:56 glusterbot News from resolvedglusterbugs: [Bug 861308] lookup blocked while waiting for self-heal that fails due to pre-existing locks <https://bugzilla.redhat.com/show_bug.cgi?id=861308>
19:56 glusterbot News from resolvedglusterbugs: [Bug 872601] split-brain caused by %preun% script if server rpm is upgraded during self-heal <https://bugzilla.redhat.com/show_bug.cgi?id=872601>
19:56 glusterbot News from resolvedglusterbugs: [Bug 872703] sticky-pointer with no trusted.dht.linkto after a replace-brick commit force, heal full migration <https://bugzilla.redhat.com/show_bug.cgi?id=872703>
19:56 glusterbot News from resolvedglusterbugs: [Bug 917686] Self-healing does not physically replicate content of new file/dir <https://bugzilla.redhat.com/show_bug.cgi?id=917686>
19:56 glusterbot News from resolvedglusterbugs: [Bug 1056276] Self-Heal Daemon is consuming excessive CPU <https://bugzilla.redhat.com/show_bug.cgi?id=1056276>
19:57 glusterbot News from resolvedglusterbugs: [Bug 971630] opening & writing of file(s) mixed with reading file(s) causing empty content <https://bugzilla.redhat.com/show_bug.cgi?id=971630>
19:57 glusterbot News from resolvedglusterbugs: [Bug 844757] improve object storage logging <https://bugzilla.redhat.com/show_bug.cgi?id=844757>
19:57 glusterbot News from resolvedglusterbugs: [Bug 910188] swift checks for mount points which are alphanumeric and gluster uses ones with dashes <https://bugzilla.redhat.com/show_bug.cgi?id=910188>
19:57 glusterbot News from resolvedglusterbugs: [Bug 911803] Gluster/Swift integration code should handle file system errors more gracefully <https://bugzilla.redhat.com/show_bug.cgi?id=911803>
19:57 glusterbot News from resolvedglusterbugs: [Bug 911914] Separate thread is used for fsync() calls on small files <https://bugzilla.redhat.com/show_bug.cgi?id=911914>
19:57 glusterbot News from resolvedglusterbugs: [Bug 912053] Remove suggested initial values for connection and node timeouts <https://bugzilla.redhat.com/show_bug.cgi?id=912053>
19:57 glusterbot News from resolvedglusterbugs: [Bug 963176] G4S: there is no automounting for gluster volumes for UFO thus all REST request fails,PUT with 404 ,GET/HEAD with 503 <https://bugzilla.redhat.com/show_bug.cgi?id=963176>
19:57 glusterbot News from resolvedglusterbugs: [Bug 1001418] Upgrade from RHS2.0-U5 to U6 results in broken gluster-swift services, it  gives 503 for every request <https://bugzilla.redhat.com/show_bug.cgi?id=1001418>
19:57 glusterbot News from resolvedglusterbugs: [Bug 950024] replace-brick immediately saturates IO on source brick causing the entire volume to be unavailable, then dies <https://bugzilla.redhat.com/show_bug.cgi?id=950024>
19:57 glusterbot News from resolvedglusterbugs: [Bug 955546] TCP connections are stacking on master geo-replication side if the slave rejects the master IP. <https://bugzilla.redhat.com/show_bug.cgi?id=955546>
19:57 glusterbot News from resolvedglusterbugs: [Bug 849770] Probing the first host won't update the hostname on the second host <https://bugzilla.redhat.com/show_bug.cgi?id=849770>
19:57 glusterbot News from resolvedglusterbugs: [Bug 851068] replace-brick reports "Migration complete" while data are not migrated <https://bugzilla.redhat.com/show_bug.cgi?id=851068>
19:57 glusterbot News from resolvedglusterbugs: [Bug 950006] replace-brick activity dies, destination glusterfs spins at 100% CPU forever <https://bugzilla.redhat.com/show_bug.cgi?id=950006>
19:57 glusterbot News from resolvedglusterbugs: [Bug 773493] File size initially wrong after a replica volumes image failure and repair. <https://bugzilla.redhat.com/show_bug.cgi?id=773493>
19:57 glusterbot News from resolvedglusterbugs: [Bug 832609] Glusterfsd hangs if brick filesystem becomes unresponsive, causing all clients to lock up <https://bugzilla.redhat.com/show_bug.cgi?id=832609>
19:57 glusterbot News from resolvedglusterbugs: [Bug 852221] Crash in fuse_thread_proc on Fedora 17 <https://bugzilla.redhat.com/show_bug.cgi?id=852221>
19:57 glusterbot News from resolvedglusterbugs: [Bug 846619] Client doesn't reconnect after server comes back online <https://bugzilla.redhat.com/show_bug.cgi?id=846619>
19:57 glusterbot News from resolvedglusterbugs: [Bug 864963] Heal-failed and Split-brain messages are not cleared after resolution of issue <https://bugzilla.redhat.com/show_bug.cgi?id=864963>
19:57 glusterbot News from resolvedglusterbugs: [Bug 835494] Volume creation fails and gives error "<brickname> or a prefix of it is already part of a volume", eventhough that brick is not part of any volume. <https://bugzilla.redhat.com/show_bug.cgi?id=835494>
19:57 glusterbot News from resolvedglusterbugs: [Bug 848543] brick directory is automatically recreated, e.g. when disk not mounted <https://bugzilla.redhat.com/show_bug.cgi?id=848543>
19:57 glusterbot News from resolvedglusterbugs: [Bug 861947] Large writes in KVM host slow on fuse, but full speed on nfs <https://bugzilla.redhat.com/show_bug.cgi?id=861947>
19:57 glusterbot News from resolvedglusterbugs: [Bug 862347] Migration with "remove-brick start" fails if bricks are more than half full <https://bugzilla.redhat.com/show_bug.cgi?id=862347>
19:57 glusterbot News from resolvedglusterbugs: [Bug 885861] implement alias capability so more than one name can refer to the same volume <https://bugzilla.redhat.com/show_bug.cgi?id=885861>
19:57 glusterbot News from resolvedglusterbugs: [Bug 895831] auth.allow limited to 1024 chars in 3.2.5 and perhaps later versions, can you increase to something much bigger or allow unlimited length or see bug 861932 <https://bugzilla.redhat.com/show_bug.cgi?id=895831>
19:57 glusterbot News from resolvedglusterbugs: [Bug 903873] Ports show as N/A in status <https://bugzilla.redhat.com/show_bug.cgi?id=903873>
19:57 glusterbot News from resolvedglusterbugs: [Bug 928781] hangs when mount a volume at own brick <https://bugzilla.redhat.com/show_bug.cgi?id=928781>
19:57 glusterbot News from resolvedglusterbugs: [Bug 951177] glusterd silently fails if a peer file is empty <https://bugzilla.redhat.com/show_bug.cgi?id=951177>
19:57 glusterbot News from resolvedglusterbugs: [Bug 963335] glusterd enters D state after replace-brick abort operation <https://bugzilla.redhat.com/show_bug.cgi?id=963335>
19:57 glusterbot News from resolvedglusterbugs: [Bug 830106] gluster volume status reports incorrect status message <https://bugzilla.redhat.com/show_bug.cgi?id=830106>
19:57 glusterbot News from resolvedglusterbugs: [Bug 858275] Gluster volume status doesn't show disconnected peers <https://bugzilla.redhat.com/show_bug.cgi?id=858275>
19:57 glusterbot News from resolvedglusterbugs: [Bug 901332] glustershd and nfs services are not restarted during an upgrade <https://bugzilla.redhat.com/show_bug.cgi?id=901332>
19:57 glusterbot News from resolvedglusterbugs: [Bug 949625] Peer rejected after upgrading <https://bugzilla.redhat.com/show_bug.cgi?id=949625>
19:57 glusterbot News from resolvedglusterbugs: [Bug 865812] glusterd stops responding after ext4 errors <https://bugzilla.redhat.com/show_bug.cgi?id=865812>
19:57 glusterbot News from resolvedglusterbugs: [Bug 948729] gluster volume create command creates brick directory in / of storage node if the specified directory does not exist <https://bugzilla.redhat.com/show_bug.cgi?id=948729>
19:57 glusterbot News from resolvedglusterbugs: [Bug 851381] Failed to access the directory, "Stale NFS file handle" <https://bugzilla.redhat.com/show_bug.cgi?id=851381>
19:57 glusterbot News from resolvedglusterbugs: [Bug 856943] unalbe to recreate directory while file under directory is opened <https://bugzilla.redhat.com/show_bug.cgi?id=856943>
19:57 glusterbot News from resolvedglusterbugs: [Bug 916934] Rebalance failures/Very slow <https://bugzilla.redhat.com/show_bug.cgi?id=916934>
19:57 glusterbot News from resolvedglusterbugs: [Bug 802423] GlusterFS does not work well with MS Office 2010 and Samba "posix locking = yes". <https://bugzilla.redhat.com/show_bug.cgi?id=802423>
19:57 glusterbot News from resolvedglusterbugs: [Bug 764927] Unstable Replication, Followed By Volume Crash <https://bugzilla.redhat.com/show_bug.cgi?id=764927>
19:57 glusterbot News from resolvedglusterbugs: [Bug 765266] Massive amount of missing files after brick power outage <https://bugzilla.redhat.com/show_bug.cgi?id=765266>
19:57 glusterbot News from resolvedglusterbugs: [Bug 765439] Split-Brain Condition Not working as expected <https://bugzilla.redhat.com/show_bug.cgi?id=765439>
19:57 glusterbot News from resolvedglusterbugs: [Bug 765550] Attempted access to split-brain file causes segfault in pthread_spin_lock(). <https://bugzilla.redhat.com/show_bug.cgi?id=765550>
19:57 glusterbot News from resolvedglusterbugs: [Bug 811329] self heal - dirty afr flags after successfull stat: the regular non-empty file two equally dirty servers case <https://bugzilla.redhat.com/show_bug.cgi?id=811329>
19:57 glusterbot News from resolvedglusterbugs: [Bug 787509] Slow client response if log partition is full <https://bugzilla.redhat.com/show_bug.cgi?id=787509>
19:57 glusterbot News from resolvedglusterbugs: [Bug 839810] RDMA high cpu usage and poor performance <https://bugzilla.redhat.com/show_bug.cgi?id=839810>
19:57 glusterbot News from resolvedglusterbugs: [Bug 764063] Debian package does not depend on fuse <https://bugzilla.redhat.com/show_bug.cgi?id=764063>
19:57 glusterbot News from resolvedglusterbugs: [Bug 765416] Log rotate does not throw proper error message <https://bugzilla.redhat.com/show_bug.cgi?id=765416>
19:57 glusterbot News from resolvedglusterbugs: [Bug 868314] replace-brick should be able to continue <https://bugzilla.redhat.com/show_bug.cgi?id=868314>
19:57 glusterbot News from resolvedglusterbugs: [Bug 1058666] Enable fusermount by default, make nightly autobuilding work <https://bugzilla.redhat.com/show_bug.cgi?id=1058666>
19:57 glusterbot News from resolvedglusterbugs: [Bug 764924] Instability and High Server Memory Usage When Using RDMA Transport <https://bugzilla.redhat.com/show_bug.cgi?id=764924>
20:05 elico joined #gluster
20:10 strata joined #gluster
20:16 systemonkey joined #gluster
20:26 glusterbot News from newglusterbugs: [Bug 1073763] network.compression fails simple '--ioengine=sync' fio test <https://bugzilla.redhat.com/show_bug.cgi?id=1073763>
20:26 glusterbot News from newglusterbugs: [Bug 1174016] network.compression fails simple '--ioengine=sync' fio test <https://bugzilla.redhat.com/show_bug.cgi?id=1174016>
20:26 glusterbot News from resolvedglusterbugs: [Bug 1039291] glusterfs-libs-3.5.0-0.1.qa3.fc21.x86_64.rpm requires rsyslog-mmjsonparse; this brings in rsyslog, ... <https://bugzilla.redhat.com/show_bug.cgi?id=1039291>
20:45 Pupeno_ joined #gluster
20:56 LebedevRI joined #gluster
20:56 glusterbot News from newglusterbugs: [Bug 1099922] Unchecked buffer fill by gf_readline in gf_history_changelog_next_change <https://bugzilla.redhat.com/show_bug.cgi?id=1099922>
20:56 glusterbot News from resolvedglusterbugs: [Bug 1108850] GlusterFS hangs which causes zombie processes <https://bugzilla.redhat.com/show_bug.cgi?id=1108850>
20:56 glusterbot News from resolvedglusterbugs: [Bug 1089470] SMB: Crash on brick process during compile kernel. <https://bugzilla.redhat.com/show_bug.cgi?id=1089470>
21:01 vimal joined #gluster
21:10 LebedevRI joined #gluster
21:11 theron joined #gluster
21:26 glusterbot News from newglusterbugs: [Bug 916375] Incomplete NLMv4 spec compliance: asynchronous requests and responses <https://bugzilla.redhat.com/show_bug.cgi?id=916375>
21:39 elico joined #gluster
21:48 gildub_ joined #gluster
21:56 daMaestro joined #gluster
22:09 cfeller joined #gluster
22:12 theron joined #gluster
22:12 rotbeard joined #gluster
22:34 badone joined #gluster
22:45 Pupeno joined #gluster
22:47 Pupeno joined #gluster
23:13 theron joined #gluster
23:25 Pupeno joined #gluster
23:28 mrEriksson joined #gluster
23:34 Pupeno joined #gluster
23:36 pdrakeweb joined #gluster
23:41 TrDS left #gluster
