IRC log for #gluster, 2015-06-20

All times shown according to UTC.

Time Nick Message
00:03 Lee- joined #gluster
00:18 ira joined #gluster
00:22 krink joined #gluster
00:37 plarsen joined #gluster
00:42 krink joined #gluster
01:20 hagarth joined #gluster
01:40 badone joined #gluster
01:41 smoothbutta joined #gluster
01:42 TheCthulhu1 joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:23 nangthang joined #gluster
02:30 kdhananjay joined #gluster
03:56 TheSeven joined #gluster
04:34 maveric_amitc_ joined #gluster
05:12 DV joined #gluster
05:20 woakes070048 joined #gluster
05:38 natarej joined #gluster
05:39 autoditac joined #gluster
05:39 natarej hey guys, is there a feature list for gluster anywhere?
05:45 tessier joined #gluster
05:50 ekuric joined #gluster
06:06 maveric_amitc_ joined #gluster
06:09 Folken joined #gluster
06:09 Folken hi
06:09 glusterbot Folken: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:10 Folken I have a disperse volume which has a problem
06:10 Folken gluster> volume heal datapoint info
06:10 Folken Brick DarkChild:/glusterfs/
06:10 Folken <gfid:e6c5ed1d-77cf-4b0e-941a-53d1abd0f9d1>
06:10 Folken <gfid:d3e21cfb-162f-4643-aa19-44e138eadf8d>
06:10 Folken Number of entries: 2
06:11 Folken the file it is stuck on was manually removed from each brick
06:11 Folken how do I remove the metadata/gfid reference for it?
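Folken's stuck entries above are a common symptom: every file on a brick has a hardlink under the brick's hidden .glusterfs directory, keyed by the first two byte-pairs of its gfid. When the data file is removed directly from the bricks, that hardlink can be left behind and `heal info` keeps reporting the orphaned gfid. A minimal sketch of locating such an entry, reusing the brick path and first gfid from the log (paths are illustrative; inspect before removing anything, and repeat on every brick):

```shell
# Brick path and gfid taken from Folken's paste above; adjust for your setup.
# A gfid aabbcc... maps to <brick>/.glusterfs/aa/bb/<full-gfid>.
BRICK=/glusterfs
GFID=e6c5ed1d-77cf-4b0e-941a-53d1abd0f9d1
GFID_PATH="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
echo "$GFID_PATH"

# Inspect first; if it is a stale entry whose data file was already removed
# from the brick, delete it, then re-check the heal queue:
#   ls -l "$GFID_PATH" && rm "$GFID_PATH"
#   gluster volume heal datapoint info
```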
06:45 krink joined #gluster
06:51 al joined #gluster
07:11 ashiq joined #gluster
07:17 ashiq joined #gluster
07:18 hchiramm joined #gluster
07:30 haomaiwang joined #gluster
07:30 jcastill1 joined #gluster
07:35 ekuric left #gluster
07:35 jcastillo joined #gluster
07:44 hgowtham joined #gluster
07:55 kovshenin joined #gluster
08:03 DV joined #gluster
08:06 DV__ joined #gluster
08:06 harish_ joined #gluster
08:58 nsoffer joined #gluster
09:07 ghenry joined #gluster
09:07 ghenry joined #gluster
09:26 harish_ joined #gluster
09:39 maveric_amitc_ joined #gluster
09:45 harish_ joined #gluster
10:00 elico joined #gluster
10:05 glusterbot News from resolvedglusterbugs: [Bug 1218570] `gluster volume heal <vol-name> split-brain' tries to heal even with insufficient arguments <https://bugzilla.redhat.co​m/show_bug.cgi?id=1218570>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1227235] glusterfsd crashed on a quota enabled volume where snapshots were scheduled <https://bugzilla.redhat.co​m/show_bug.cgi?id=1227235>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1230691] [geo-rep]: use_meta_volume config option should be validated for its values <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230691>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1231213] [geo-rep]: rsync should be made dependent package for geo-replication <https://bugzilla.redhat.co​m/show_bug.cgi?id=1231213>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1229100] Do not invoke glfs_fini for glfs-heal processes. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1229100>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1232602] bug-857330/xml.t fails spuriously <https://bugzilla.redhat.co​m/show_bug.cgi?id=1232602>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1227615] "Snap_scheduler disable" should have different return codes for different failures. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1227615>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1228181] Simplify creation and set-up of meta-volume (shared storage) <https://bugzilla.redhat.co​m/show_bug.cgi?id=1228181>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1228592] Glusterd fails to start after volume restore, tier attach and node reboot <https://bugzilla.redhat.co​m/show_bug.cgi?id=1228592>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1230018] [SNAPSHOT]: Initializing snap_scheduler from all nodes at the same time should give proper error message <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230018>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1230167] [Snapshot] Python crashes with trace back notification when shared storage is unmount from Storage Node <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230167>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1221473] BVT: Posix crash while running BVT on 3.7beta2 build on rhel6.6 <https://bugzilla.redhat.co​m/show_bug.cgi?id=1221473>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1221941] glusterfsd: bricks crash while executing ls on nfs-ganesha vers=3 <https://bugzilla.redhat.co​m/show_bug.cgi?id=1221941>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1228160] linux untar hanged after the bricks are up in a 8+4 config <https://bugzilla.redhat.co​m/show_bug.cgi?id=1228160>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1230653] Disperse volume : client crashed while running IO <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230653>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1233042] use after free bug in dht <https://bugzilla.redhat.co​m/show_bug.cgi?id=1233042>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1233484] Possible double execution of the state machine for fops that start other subfops <https://bugzilla.redhat.co​m/show_bug.cgi?id=1233484>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1228045] Scrubber should be disabled once bitrot is reset <https://bugzilla.redhat.co​m/show_bug.cgi?id=1228045>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1228065] Though brick demon is not running, gluster vol status command shows the pid <https://bugzilla.redhat.co​m/show_bug.cgi?id=1228065>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1231646] [glusterd] glusterd crashed while trying to remove a bricks - one selected from each replica set - after shrinking nX3 to nX2 to nX1 <https://bugzilla.redhat.co​m/show_bug.cgi?id=1231646>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1231832] bitrot: (rfe) object signing wait time value should be tunable. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1231832>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1232589] [Bitrot] Gluster v set <volname> bitrot enable command succeeds , which is not supported to enable bitrot <https://bugzilla.redhat.co​m/show_bug.cgi?id=1232589>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1223390] packaging: .pc files included in -api-devel should be in -devel <https://bugzilla.redhat.co​m/show_bug.cgi?id=1223390>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1226962] nfs-ganesha: Getting issues for nfs-ganesha on new nodes of glusterfs,error is /etc/ganesha/ganesha-ha.conf: line 11: VIP_<hostname with fqdn>=<ip>: command not found <https://bugzilla.redhat.co​m/show_bug.cgi?id=1226962>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1230694] [Backup]: Glusterfind pre fails with htime xattr updation error resulting in historical changelogs not available <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230694>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1233044] [geo-rep]: Segmentation faults are observed on all the master nodes <https://bugzilla.redhat.co​m/show_bug.cgi?id=1233044>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1227167] NFS: IOZone tests hang, disconnects and hung tasks seen in logs. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1227167>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1228601] [Virt-RHGS] Creating a image on gluster volume using qemu-img + gfapi throws error messages related to rpc_transport <https://bugzilla.redhat.co​m/show_bug.cgi?id=1228601>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1226880] Fix infinite looping in shard_readdir(p) on '/' <https://bugzilla.redhat.co​m/show_bug.cgi?id=1226880>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1227572] Sharding - Fix posix compliance test failures. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1227572>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1227576] libglusterfs: Copy _all_ members of gf_dirent_t in entry_copy() <https://bugzilla.redhat.co​m/show_bug.cgi?id=1227576>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1229550] [AFR-V2] - Fix shd coredump from tests/bugs/glusterd/bug-948686.t <https://bugzilla.redhat.co​m/show_bug.cgi?id=1229550>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1232155] Not able to export volume using nfs-ganesha <https://bugzilla.redhat.co​m/show_bug.cgi?id=1232155>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1225548] [Backup]: Misleading error message when glusterfind delete is given with non-existent volume <https://bugzilla.redhat.co​m/show_bug.cgi?id=1225548>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1225565] [Backup]: RFE - Glusterfind CLI commands need to respond based on volume's start/stop state <https://bugzilla.redhat.co​m/show_bug.cgi?id=1225565>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1230783] [Backup]: Crash observed when glusterfind pre is run after deleting a directory containing files <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230783>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1230791] [Backup]: 'Glusterfind list' should display an appropriate output when there are no active sessions <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230791>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1223890] readdirp return 64bits inodes even if enable-ino32 is set <https://bugzilla.redhat.co​m/show_bug.cgi?id=1223890>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1228510] Building packages on RHEL-5 based distributions fails <https://bugzilla.redhat.co​m/show_bug.cgi?id=1228510>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1231366] NFS Authentication Performance Issue <https://bugzilla.redhat.co​m/show_bug.cgi?id=1231366>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1221656] rebalance failing on one of the node <https://bugzilla.redhat.co​m/show_bug.cgi?id=1221656>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1225320] ls command failed with features.read-only on while mounting ec volume. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1225320>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1226272] Volume heal info not reporting files in split brain and core dumping, after upgrading to 3.7.0 <https://bugzilla.redhat.co​m/show_bug.cgi?id=1226272>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1230350] Client hung up on listing the files on a perticular directory <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230350>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1225809] [DHT-REBALANCE]-DataLoss: The data appended to a file during its migration will be lost once the migration is done <https://bugzilla.redhat.co​m/show_bug.cgi?id=1225809>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1225859] Glusterfs client crash during fd migration after graph switch <https://bugzilla.redhat.co​m/show_bug.cgi?id=1225859>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1227674] Honour afr self-heal volume set options from clients <https://bugzilla.redhat.co​m/show_bug.cgi?id=1227674>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1230693] [geo-rep]: RENAME are not synced to slave when quota is enabled. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230693>
10:05 glusterbot News from resolvedglusterbugs: [Bug 1225574] [geo-rep]: client-rpc-fops.c:172:client3_3_symlink_cbk can be handled better/or ignore these messages in the slave cluster log <https://bugzilla.redhat.co​m/show_bug.cgi?id=1225574>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1225940] DHT: lookup-unhashed feature breaks runtime compatibility with older client versions <https://bugzilla.redhat.co​m/show_bug.cgi?id=1225940>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1225999] Update gluster op version to 30701 <https://bugzilla.redhat.co​m/show_bug.cgi?id=1225999>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1227887] Update gluster op version to 30702 <https://bugzilla.redhat.co​m/show_bug.cgi?id=1227887>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1227677] Glusterd crashes and cannot start after rebalance <https://bugzilla.redhat.co​m/show_bug.cgi?id=1227677>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1227611] Fix deadlock in timer-wheel del_timer() API <https://bugzilla.redhat.co​m/show_bug.cgi?id=1227611>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1232179] Objects are not signed upon truncate() <https://bugzilla.redhat.co​m/show_bug.cgi?id=1232179>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1232135] Quota:  " E [quota.c:1197:quota_check_limit] 0-ecvol-quota: Failed to check quota size limit" in brick logs <https://bugzilla.redhat.co​m/show_bug.cgi?id=1232135>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1225796] Spurious failure in tests/bugs/disperse/bug-1161621.t <https://bugzilla.redhat.co​m/show_bug.cgi?id=1225796>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1230563] tiering:glusterd crashed when trying to detach-tier commit force on a non-tiered volume. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230563>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1229331] Disperse volume : glusterfs crashed <https://bugzilla.redhat.co​m/show_bug.cgi?id=1229331>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1230712] [Backup]: Chown/chgrp for a directory does not get recorded as a MODIFY entry in the outfile <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230712>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1226117] [RFE] Return proper error codes in case of snapshot failure <https://bugzilla.redhat.co​m/show_bug.cgi?id=1226117>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1227028] nfs-ganesha: Discrepancies with lock states recovery during migration <https://bugzilla.redhat.co​m/show_bug.cgi?id=1227028>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1232002] nfs-ganesha: 8 node pcs cluster setup fails <https://bugzilla.redhat.co​m/show_bug.cgi?id=1232002>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1233056] Not able to create snapshots for geo-replicated volumes when session is created with root user <https://bugzilla.redhat.co​m/show_bug.cgi?id=1233056>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1222065] GlusterD fills the logs when the NFS-server is disabled <https://bugzilla.redhat.co​m/show_bug.cgi?id=1222065>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1232143] nfs-ganesha: trying to bring up nfs-ganesha on three node shows error although pcs status and ganesha process on all three nodes <https://bugzilla.redhat.co​m/show_bug.cgi?id=1232143>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1232335] nfs-ganesha: volume is not in list of exports in case of volume stop followed by volume start <https://bugzilla.redhat.co​m/show_bug.cgi?id=1232335>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1230560] data tiering: do not allow tiering related volume set options on a regular volume <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230560>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1228729] nfs-ganesha: rmdir logs "remote operation failed: Stale file handle" even though the operation is successful <https://bugzilla.redhat.co​m/show_bug.cgi?id=1228729>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1231516] glusterfsd process on 100% cpu, upcall busy loop in reaper thread <https://bugzilla.redhat.co​m/show_bug.cgi?id=1231516>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1229282] Disperse volume: Huge memory leak of glusterfsd process <https://bugzilla.redhat.co​m/show_bug.cgi?id=1229282>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1233117] quota: quota list displays double the size of previous value, post heal completion. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1233117>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1227206] GlusterFS 3.7.2 tracker <https://bugzilla.redhat.co​m/show_bug.cgi?id=1227206>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1194640] Tracker bug for Logging framework expansion. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1194640>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1217722] Tracker bug for Logging framework expansion. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1217722>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1225839] [DHT:REBALANCE]: xattrs set on the file during rebalance migration will be lost after migration is over <https://bugzilla.redhat.co​m/show_bug.cgi?id=1225839>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1228100] Disperse volume: brick logs are getting filled with "anonymous fd creation failed" messages <https://bugzilla.redhat.co​m/show_bug.cgi?id=1228100>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1230687] [Backup]: 'New' as well as 'Modify' entry getting recorded for a newly created hardlink <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230687>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1226213] snap_scheduler script must be usable as python module. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1226213>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1219953] The python-gluster RPM should be 'noarch' <https://bugzilla.redhat.co​m/show_bug.cgi?id=1219953>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1227916] auth_cache_entry structure barely gets cached <https://bugzilla.redhat.co​m/show_bug.cgi?id=1227916>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1226789] quota: ENOTCONN parodically seen in logs when setting hard/soft timeout during I/O. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1226789>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1226792] Statfs is hung because of frame loss in quota <https://bugzilla.redhat.co​m/show_bug.cgi?id=1226792>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1225551] [Backup]: Glusterfind session entry persists even after volume is deleted <https://bugzilla.redhat.co​m/show_bug.cgi?id=1225551>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1230715] [Backup]: Glusterfind delete does not delete the session related information present in $GLUSTERD_WORKDIR <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230715>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1230026] BVT: glusterd crashed and dumped during upgrade (on rhel7.1 server) <https://bugzilla.redhat.co​m/show_bug.cgi?id=1230026>
10:06 glusterbot News from resolvedglusterbugs: [Bug 1226224] [RFE] Quota: Make "quota-deem-statfs" option ON, by default, when quota is enabled. <https://bugzilla.redhat.co​m/show_bug.cgi?id=1226224>
10:22 sysconfig joined #gluster
10:25 glusterbot News from newglusterbugs: [Bug 1218961] snapshot: Can not activate the name provided while creating snaps to do any further access <https://bugzilla.redhat.co​m/show_bug.cgi?id=1218961>
10:25 glusterbot News from newglusterbugs: [Bug 1219399] NFS interoperability problem: Gluster Striped-Replicated can't read on vmware esxi 5.x NFS client <https://bugzilla.redhat.co​m/show_bug.cgi?id=1219399>
10:25 glusterbot News from newglusterbugs: [Bug 1225077] Fix regression test spurious failures <https://bugzilla.redhat.co​m/show_bug.cgi?id=1225077>
10:30 deniszh joined #gluster
10:31 purpleidea joined #gluster
11:10 elico joined #gluster
11:31 sysconfig joined #gluster
11:43 LebedevRI joined #gluster
11:50 anrao joined #gluster
12:00 anrao joined #gluster
12:05 jrm16020 joined #gluster
12:22 sysconfig_ joined #gluster
12:30 soumya joined #gluster
12:40 sysconfig joined #gluster
12:40 sysconfig_ left #gluster
12:40 sysconfig left #gluster
12:41 sysconfig joined #gluster
12:41 Pupeno joined #gluster
12:43 maveric_amitc_ joined #gluster
12:52 DV joined #gluster
12:57 hagarth joined #gluster
13:03 DV joined #gluster
13:13 DV__ joined #gluster
13:51 nsoffer joined #gluster
14:10 elico joined #gluster
14:14 TheSeven joined #gluster
14:21 krink joined #gluster
14:46 Pupeno joined #gluster
15:13 Pupeno joined #gluster
15:20 chirino joined #gluster
15:21 elico joined #gluster
15:33 hamiller joined #gluster
15:45 elico joined #gluster
15:46 premera joined #gluster
15:47 premera joined #gluster
15:47 premera joined #gluster
15:48 premera joined #gluster
15:48 blonkel joined #gluster
15:49 premera joined #gluster
15:50 premera joined #gluster
15:50 premera joined #gluster
15:50 premera joined #gluster
15:51 blonkel hey, I'm running GlusterFS in a Docker env. After a few hacks it works like a charm! Now I would like to take a node offline and rejoin it to my GlusterFS
15:51 blonkel I tried these commands: http://pastebin.com/e0vzriBg
15:51 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:51 blonkel but it seems it's not working
15:51 premera joined #gluster
15:51 premera joined #gluster
15:52 premera joined #gluster
15:53 blonkel http://fpaste.org/235038/48155941/ heres some status outputs
15:54 blonkel http://fpaste.org/235039/43481565/ sorry about double post
15:54 blonkel any ideas why it's not resyncing / endpoint not connected?
15:56 jcastill1 joined #gluster
16:01 jcastillo joined #gluster
16:18 blonkel hmm :(
16:18 blonkel it seems glusterfsd was not restarted
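blonkel's diagnosis above (brick daemon not restarted, hence "endpoint not connected") has a standard remedy: `gluster volume start ... force` respawns any glusterfsd brick processes that are not running, without disturbing the bricks that are. A minimal recovery sketch; "myvol" is a placeholder since blonkel's volume name is not shown in the log:

```shell
# Respawn any missing brick daemons for the volume (no-op for healthy bricks):
gluster volume start myvol force

# Confirm every brick shows Online=Y with a port and PID:
gluster volume status myvol

# Trigger self-heal so the rejoined node catches up, then watch the queue drain:
gluster volume heal myvol
gluster volume heal myvol info
```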
16:22 premera joined #gluster
16:22 premera joined #gluster
16:23 premera joined #gluster
16:23 premera joined #gluster
16:23 premera joined #gluster
16:24 premera joined #gluster
16:24 premera joined #gluster
16:25 premera joined #gluster
16:25 hchiramm joined #gluster
16:25 premera joined #gluster
16:26 glusterbot News from newglusterbugs: [Bug 1220173] SEEK_HOLE support (optimization) <https://bugzilla.redhat.co​m/show_bug.cgi?id=1220173>
16:37 elico joined #gluster
16:38 aaronott joined #gluster
16:57 elico joined #gluster
16:59 akay1 can anyone point me to 3.7.2 release notes?
17:03 Pupeno joined #gluster
17:03 Pupeno joined #gluster
17:08 elico joined #gluster
17:33 blonkel akay1: https://github.com/gluster/g​lusterfs/commits/release-3.7
17:33 blonkel :P
17:44 Pupeno joined #gluster
18:01 elico joined #gluster
18:12 jiffin joined #gluster
19:42 nsoffer joined #gluster
20:05 VeggieMeat_ joined #gluster
20:06 eclectic joined #gluster
20:06 jermudge- joined #gluster
20:07 frakt_ joined #gluster
20:07 wica_ joined #gluster
20:07 vincent_1dk joined #gluster
20:07 eljrax_ joined #gluster
20:08 tigert_ joined #gluster
20:08 mator_ joined #gluster
20:08 d-fence_ joined #gluster
20:09 khanku_ joined #gluster
20:10 sankarsh` joined #gluster
20:10 nhayashi joined #gluster
20:11 Kins joined #gluster
20:11 RaSTarl joined #gluster
20:12 JoeJulian_ joined #gluster
20:16 ultrabizweb joined #gluster
20:18 ctria joined #gluster
20:20 ccha3 joined #gluster
20:20 DV joined #gluster
20:20 ccha3 joined #gluster
20:21 owlbot joined #gluster
20:21 k-ma joined #gluster
20:21 PatNarciso --netsplits
20:22 partner joined #gluster
20:22 chirino_m joined #gluster
20:22 VeggieMeat joined #gluster
20:23 liewegas joined #gluster
20:24 neoice_ joined #gluster
20:24 khanku joined #gluster
20:24 [7] joined #gluster
20:25 mikemol_ joined #gluster
20:25 abyss__ joined #gluster
20:25 Peppaq joined #gluster
20:27 sankarshan joined #gluster
20:28 nhayashi_ joined #gluster
20:28 ultrabizweb_ joined #gluster
20:28 milkyline_ joined #gluster
20:33 jermudgeon joined #gluster
20:33 tjikkun joined #gluster
20:35 maZtah joined #gluster
20:36 natgeorg joined #gluster
20:37 bivak joined #gluster
20:37 anoopcs joined #gluster
20:38 sblanton_ joined #gluster
20:38 ctria joined #gluster
20:38 DV__ joined #gluster
20:38 al joined #gluster
20:38 XpineX joined #gluster
20:38 Ramereth|home joined #gluster
20:39 siel_ joined #gluster
20:39 Rydekull joined #gluster
20:40 ndevos_ joined #gluster
20:40 ndevos_ joined #gluster
20:40 rotbeard joined #gluster
20:41 jvandewege_ joined #gluster
20:41 Marqin_ joined #gluster
20:45 crashmag_ joined #gluster
20:48 virusuy joined #gluster
20:51 vovcia joined #gluster
20:51 glusterbot` joined #gluster
20:52 rp_ joined #gluster
20:53 jotun_ joined #gluster
20:53 trig_ joined #gluster
20:53 JustinCl1ft joined #gluster
20:54 hchiramm_ joined #gluster
20:54 hflai joined #gluster
20:54 neoice joined #gluster
20:54 n-st joined #gluster
20:54 csim_ joined #gluster
20:54 R0ok__ joined #gluster
20:54 mrErikss1n joined #gluster
20:54 javi404 joined #gluster
20:55 afics joined #gluster
20:55 klaxa joined #gluster
20:55 xavih joined #gluster
20:58 gothos_ joined #gluster
20:59 bcicen_ joined #gluster
21:00 CyrilPeponnet joined #gluster
21:00 R0ok_ joined #gluster
21:02 dblack joined #gluster
21:02 kbyrne joined #gluster
21:02 JoeJulian_ joined #gluster
21:02 semiosis_ joined #gluster
21:02 semiosis joined #gluster
21:03 nhayashi joined #gluster
21:05 jotun joined #gluster
21:05 Intensity joined #gluster
21:06 necrogami joined #gluster
21:11 trig joined #gluster
21:12 siel joined #gluster
21:12 bivak joined #gluster
21:13 lanning joined #gluster
21:13 Lee-- joined #gluster
21:14 JPaul joined #gluster
21:14 rehunted joined #gluster
21:15 uebera|| joined #gluster
21:15 purpleid1a joined #gluster
21:16 pjschmit1 joined #gluster
21:18 jotun_ joined #gluster
21:19 capri joined #gluster
21:20 mikedep3- joined #gluster
21:20 swebb joined #gluster
21:20 Champi joined #gluster
21:20 JamesToo joined #gluster
21:21 kenansulayman joined #gluster
21:21 joshin joined #gluster
21:21 joshin joined #gluster
21:21 JoeJulian joined #gluster
21:21 RobertLaptop joined #gluster
21:21 coreping joined #gluster
21:22 lkoranda joined #gluster
21:23 zerick joined #gluster
21:24 papamoose joined #gluster
21:24 virusuy joined #gluster
21:24 milkyline joined #gluster
21:27 foster_ joined #gluster
21:27 cyberbootje joined #gluster
21:27 Uguu joined #gluster
21:28 Marqin joined #gluster
21:28 mkzero_ joined #gluster
21:29 tru_tru_ joined #gluster
21:30 semiosis_ joined #gluster
21:33 lalatend1M joined #gluster
21:36 msvbhat_ joined #gluster
21:38 devilspgd_ joined #gluster
21:38 veonik_ joined #gluster
21:38 eryc joined #gluster
21:38 eryc joined #gluster
21:38 semiosis joined #gluster
21:38 Dave joined #gluster
21:41 scuttle` joined #gluster
21:47 argonius_ joined #gluster
21:48 ninkotech_ joined #gluster
21:48 nage joined #gluster
21:48 nage joined #gluster
21:50 atrius joined #gluster
21:51 [o__o] joined #gluster
21:52 badone joined #gluster
21:52 cuqa_ joined #gluster
21:57 bjornar joined #gluster
22:00 rotbeard joined #gluster
22:16 wkf joined #gluster
22:38 theron joined #gluster
23:09 lexi2 joined #gluster
23:14 B21956 joined #gluster
23:24 wkf joined #gluster
23:30 vovcia1 joined #gluster
23:31 m0zes_ joined #gluster
23:31 twx_ joined #gluster
23:32 fleducquede__ joined #gluster
23:33 dastar joined #gluster
23:33 CyrilPeponnet joined #gluster
23:34 diegows joined #gluster
23:35 masterzen joined #gluster
23:39 diegows joined #gluster
23:40 wkf joined #gluster
23:47 plarsen joined #gluster
23:57 natarej joined #gluster